Test Report: Hyperkit_macOS 19576

2e9b50ac88536491e648f1503809a6b59d99d481:2024-09-06:36104

Test fail (24/219)
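The failing TestOffline run below boils down to a single `minikube start` invocation that exited with status 80. A minimal sketch for reproducing it by hand, outside the test harness (assumes a locally built `out/minikube-darwin-amd64` and an installed `docker-machine-driver-hyperkit`; the profile name and flags are copied from the log):

```shell
# Re-run the start invocation that TestOffline executes (sketch; the
# profile name "offline-docker-273000" is the one from this CI run).
out/minikube-darwin-amd64 start -p offline-docker-273000 \
  --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit

# Clean up the profile afterwards.
out/minikube-darwin-amd64 delete -p offline-docker-273000
```

This only reproduces the environment-dependent failure if run on a macOS host with hyperkit available.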

TestOffline (195.36s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-273000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-273000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.844662484s)

-- stdout --
	* [offline-docker-273000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-273000" primary control-plane node in "offline-docker-273000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-273000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	I0906 12:37:16.236739   14010 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:16.237083   14010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:16.237094   14010 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:16.237100   14010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:16.237312   14010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:37:16.238959   14010 out.go:352] Setting JSON to false
	I0906 12:37:16.263447   14010 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13007,"bootTime":1725638429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:37:16.263559   14010 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:37:16.287678   14010 out.go:177] * [offline-docker-273000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:37:16.330128   14010 notify.go:220] Checking for updates...
	I0906 12:37:16.359860   14010 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:37:16.428989   14010 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:37:16.450968   14010 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:37:16.471744   14010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:37:16.499877   14010 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:37:16.520750   14010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:37:16.542203   14010 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:37:16.570901   14010 out.go:177] * Using the hyperkit driver based on user configuration
	I0906 12:37:16.612737   14010 start.go:297] selected driver: hyperkit
	I0906 12:37:16.612750   14010 start.go:901] validating driver "hyperkit" against <nil>
	I0906 12:37:16.612762   14010 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:37:16.615455   14010 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:16.615571   14010 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:37:16.624045   14010 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:37:16.627564   14010 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:37:16.627587   14010 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:37:16.627618   14010 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:37:16.627825   14010 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:37:16.627889   14010 cni.go:84] Creating CNI manager for ""
	I0906 12:37:16.627906   14010 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:37:16.627914   14010 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:37:16.627990   14010 start.go:340] cluster config:
	{Name:offline-docker-273000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-273000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:16.628077   14010 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:16.696831   14010 out.go:177] * Starting "offline-docker-273000" primary control-plane node in "offline-docker-273000" cluster
	I0906 12:37:16.717891   14010 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:37:16.717958   14010 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:37:16.717979   14010 cache.go:56] Caching tarball of preloaded images
	I0906 12:37:16.718155   14010 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:37:16.718168   14010 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:37:16.718441   14010 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/offline-docker-273000/config.json ...
	I0906 12:37:16.718470   14010 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/offline-docker-273000/config.json: {Name:mkd86654430baf342335604e4313ab808af56497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:37:16.718855   14010 start.go:360] acquireMachinesLock for offline-docker-273000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:37:16.718918   14010 start.go:364] duration metric: took 46.538µs to acquireMachinesLock for "offline-docker-273000"
	I0906 12:37:16.718940   14010 start.go:93] Provisioning new machine with config: &{Name:offline-docker-273000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-273000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:37:16.719008   14010 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:37:16.739929   14010 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:37:16.740091   14010 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:37:16.740132   14010 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:37:16.749148   14010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58079
	I0906 12:37:16.749531   14010 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:37:16.750082   14010 main.go:141] libmachine: Using API Version  1
	I0906 12:37:16.750097   14010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:37:16.750327   14010 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:37:16.750438   14010 main.go:141] libmachine: (offline-docker-273000) Calling .GetMachineName
	I0906 12:37:16.750570   14010 main.go:141] libmachine: (offline-docker-273000) Calling .DriverName
	I0906 12:37:16.750702   14010 start.go:159] libmachine.API.Create for "offline-docker-273000" (driver="hyperkit")
	I0906 12:37:16.750737   14010 client.go:168] LocalClient.Create starting
	I0906 12:37:16.750783   14010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:37:16.750845   14010 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:16.750862   14010 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:16.750946   14010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:37:16.750986   14010 main.go:141] libmachine: Decoding PEM data...
	I0906 12:37:16.750998   14010 main.go:141] libmachine: Parsing certificate...
	I0906 12:37:16.751010   14010 main.go:141] libmachine: Running pre-create checks...
	I0906 12:37:16.751020   14010 main.go:141] libmachine: (offline-docker-273000) Calling .PreCreateCheck
	I0906 12:37:16.751104   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:16.751250   14010 main.go:141] libmachine: (offline-docker-273000) Calling .GetConfigRaw
	I0906 12:37:16.760806   14010 main.go:141] libmachine: Creating machine...
	I0906 12:37:16.760818   14010 main.go:141] libmachine: (offline-docker-273000) Calling .Create
	I0906 12:37:16.760933   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:16.761069   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:37:16.760928   14031 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:37:16.761129   14010 main.go:141] libmachine: (offline-docker-273000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:37:17.231129   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:37:17.231043   14031 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/id_rsa...
	I0906 12:37:17.342006   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:37:17.341894   14031 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/offline-docker-273000.rawdisk...
	I0906 12:37:17.342025   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Writing magic tar header
	I0906 12:37:17.342040   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Writing SSH key tar header
	I0906 12:37:17.342617   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:37:17.342506   14031 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000 ...
	I0906 12:37:17.816859   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:17.816898   14010 main.go:141] libmachine: (offline-docker-273000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/hyperkit.pid
	I0906 12:37:17.816931   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Using UUID 8c938373-7ab5-414a-a6c3-3d7d3bcac9b0
	I0906 12:37:18.035286   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Generated MAC de:b1:1f:79:fb:4e
	I0906 12:37:18.035307   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-273000
	I0906 12:37:18.035338   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8c938373-7ab5-414a-a6c3-3d7d3bcac9b0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041e1e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:37:18.035372   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8c938373-7ab5-414a-a6c3-3d7d3bcac9b0", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00041e1e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:37:18.035417   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8c938373-7ab5-414a-a6c3-3d7d3bcac9b0", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/offline-docker-273000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-273000"}
	I0906 12:37:18.035464   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8c938373-7ab5-414a-a6c3-3d7d3bcac9b0 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/offline-docker-273000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-273000"
	I0906 12:37:18.035481   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:37:18.038609   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 DEBUG: hyperkit: Pid is 14055
	I0906 12:37:18.039082   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 0
	I0906 12:37:18.039099   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:18.039148   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:18.040074   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:18.040190   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:18.040200   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:18.040210   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:18.040217   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:18.040224   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:18.040230   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:18.040238   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:18.040245   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:18.040251   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:18.040258   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:18.040265   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:18.040274   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:18.040310   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:18.040329   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:18.040348   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:18.040363   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:18.040383   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:18.040431   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:18.040448   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:18.040471   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:18.040485   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:18.040494   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:18.040506   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:18.040518   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:18.040527   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:18.040544   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:18.040553   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:18.040561   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:18.040571   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:18.040579   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:18.040596   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:18.040625   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:18.040636   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:18.040647   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:18.040662   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:18.040672   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:18.040681   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:18.040688   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:18.046608   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:37:18.099639   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:37:18.118042   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:37:18.118078   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:37:18.118093   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:37:18.118107   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:37:18.498449   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:37:18.498465   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:37:18.613377   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:37:18.613419   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:37:18.613441   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:37:18.613456   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:37:18.614269   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:37:18.614279   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:37:20.042233   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 1
	I0906 12:37:20.042248   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:20.042362   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:20.043202   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:20.043266   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:20.043277   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:20.043290   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:20.043297   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:20.043305   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:20.043313   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:20.043325   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:20.043334   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:20.043342   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:20.043347   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:20.043362   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:20.043373   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:20.043383   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:20.043393   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:20.043402   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:20.043411   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:20.043418   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:20.043429   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:20.043443   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:20.043452   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:20.043458   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:20.043464   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:20.043471   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:20.043483   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:20.043503   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:20.043515   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:20.043523   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:20.043532   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:20.043540   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:20.043547   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:20.043560   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:20.043572   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:20.043580   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:20.043588   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:20.043596   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:20.043602   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:20.043608   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:20.043619   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:22.044737   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 2
	I0906 12:37:22.044754   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:22.044826   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:22.045611   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:22.045680   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:22.045690   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:22.045709   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:22.045715   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:22.045722   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:22.045728   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:22.045736   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:22.045742   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:22.045749   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:22.045763   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:22.045771   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:22.045779   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:22.045800   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:22.045809   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:22.045817   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:22.045828   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:22.045841   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:22.045850   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:22.045857   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:22.045865   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:22.045872   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:22.045883   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:22.045891   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:22.045901   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:22.045908   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:22.045915   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:22.045923   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:22.045928   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:22.045943   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:22.045956   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:22.045967   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:22.045983   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:22.045990   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:22.046004   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:22.046015   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:22.046022   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:22.046032   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:22.046042   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:24.035490   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:24 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:37:24.035673   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:24 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:37:24.035683   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:24 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:37:24.046741   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 3
	I0906 12:37:24.046752   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:24.046833   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:24.047649   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:24.047738   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:24.047747   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:24.047756   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:24.047762   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:24.047773   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:24.047782   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:24.047791   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:24.047799   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:24.047807   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:24.047815   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:24.047824   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:24.047831   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:24.047840   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:24.047848   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:24.047855   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:24.047862   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:24.047874   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:24.047887   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:24.047895   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:24.047903   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:24.047918   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:24.047932   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:24.047940   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:24.047948   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:24.047956   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:24.047963   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:24.047970   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:24.047984   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:24.047991   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:24.047999   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:24.048014   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:24.048022   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:24.048030   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:24.048038   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:24.048044   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:24.048052   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:24.048059   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:24.048067   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:24.055083   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:37:24 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:37:26.048601   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 4
	I0906 12:37:26.048617   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:26.048710   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:26.049497   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:26.049591   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:26.049601   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:26.049634   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:26.049648   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:26.049669   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:26.049689   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:26.049706   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:26.049713   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:26.049721   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:26.049729   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:26.049746   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:26.049761   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:26.049776   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:26.049791   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:26.049804   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:26.049817   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:26.049826   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:26.049831   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:26.049848   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:26.049857   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:26.049865   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:26.049871   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:26.049884   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:26.049898   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:26.049907   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:26.049915   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:26.049922   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:26.049928   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:26.049934   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:26.049941   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:26.049949   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:26.049956   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:26.049963   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:26.049970   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:26.049986   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:26.049996   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:26.050003   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:26.050009   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:28.051854   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 5
	I0906 12:37:28.051876   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:28.051918   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:28.052704   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:28.052778   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:28.052799   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:28.052808   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:28.052814   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:28.052832   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:28.052842   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:28.052849   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:28.052855   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:28.052861   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:28.052869   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:28.052877   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:28.052883   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:28.052891   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:28.052898   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:28.052909   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:28.052933   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:28.052946   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:28.052956   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:28.052968   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:28.052977   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:28.052988   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:28.052998   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:28.053014   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:28.053021   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:28.053027   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:28.053035   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:28.053046   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:28.053054   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:28.053063   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:28.053075   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:28.053084   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:28.053108   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:28.053121   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:28.053129   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:28.053138   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:28.053146   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:28.053152   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:28.053162   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:30.053618   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 6
	I0906 12:37:30.053632   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:30.053718   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:30.054535   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:30.054615   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:32.056507   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 7
	I0906 12:37:32.056521   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:32.056602   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:32.057372   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:32.057423   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:34.059386   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 8
	I0906 12:37:34.059402   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:34.059415   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:34.060253   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:34.060339   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:34.060349   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:34.060359   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:34.060370   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:34.060398   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:34.060414   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:34.060427   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:34.060438   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:34.060449   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:34.060456   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:34.060466   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:34.060478   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:34.060485   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:34.060495   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:34.060509   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:34.060518   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:34.060526   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:34.060534   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:34.060542   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:34.060548   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:34.060557   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:34.060575   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:34.060587   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:34.060606   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:34.060624   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:34.060634   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:34.060653   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:34.060663   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:34.060672   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:34.060681   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:34.060696   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:34.060708   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:34.060723   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:34.060736   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:34.060745   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:34.060751   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:34.060769   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:34.060781   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:36.062045   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 9
	I0906 12:37:36.062064   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:36.062128   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:36.062913   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:36.062960   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:36.062975   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:36.062988   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:36.062997   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:36.063008   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:36.063015   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:36.063027   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:36.063036   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:36.063045   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:36.063051   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:36.063063   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:36.063072   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:36.063080   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:36.063088   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:36.063105   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:36.063117   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:36.063126   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:36.063137   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:36.063145   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:36.063154   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:36.063161   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:36.063170   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:36.063177   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:36.063183   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:36.063198   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:36.063211   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:36.063219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:36.063227   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:36.063234   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:36.063243   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:36.063250   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:36.063258   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:36.063265   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:36.063274   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:36.063281   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:36.063299   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:36.063317   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:36.063328   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:38.064985   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 10
	I0906 12:37:38.064996   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:38.065052   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:38.065844   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:38.065889   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:38.065897   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:38.065908   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:38.065917   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:38.065933   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:38.065940   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:38.065950   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:38.065959   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:38.065979   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:38.065990   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:38.065999   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:38.066008   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:38.066016   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:38.066023   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:38.066031   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:38.066038   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:38.066051   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:38.066059   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:38.066066   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:38.066074   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:38.066081   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:38.066087   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:38.066093   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:38.066100   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:38.066106   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:38.066122   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:38.066134   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:38.066149   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:38.066157   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:38.066166   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:38.066173   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:38.066183   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:38.066192   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:38.066201   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:38.066214   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:38.066223   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:38.066232   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:38.066240   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:40.066524   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 11
	I0906 12:37:40.066537   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:40.066591   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:40.067351   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:40.067419   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:40.067427   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:40.067436   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:40.067446   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:40.067456   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:40.067462   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:40.067474   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:40.067486   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:40.067494   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:40.067500   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:40.067506   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:40.067514   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:40.067523   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:40.067539   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:40.067553   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:40.067561   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:40.067570   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:40.067577   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:40.067585   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:40.067592   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:40.067601   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:40.067610   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:40.067617   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:40.067625   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:40.067639   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:40.067648   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:40.067656   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:40.067664   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:40.067682   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:40.067696   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:40.067707   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:40.067713   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:40.067721   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:40.067727   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:40.067737   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:40.067746   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:40.067753   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:40.067761   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:42.069647   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 12
	I0906 12:37:42.069662   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:42.069705   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:42.070497   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:42.070534   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:42.070542   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:42.070550   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:42.070561   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:42.070573   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:42.070590   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:42.070598   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:42.070604   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:42.070623   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:42.070629   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:42.070641   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:42.070648   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:42.070655   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:42.070661   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:42.070668   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:42.070676   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:42.070683   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:42.070689   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:42.070709   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:42.070726   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:42.070747   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:42.070757   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:42.070764   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:42.070779   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:42.070791   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:42.070804   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:42.070816   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:42.070825   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:42.070837   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:42.070846   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:42.070855   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:42.070861   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:42.070868   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:42.070876   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:42.070883   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:42.070891   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:42.070906   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:42.070925   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:44.071427   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 13
	I0906 12:37:44.071440   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:44.071495   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:44.072269   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:44.072323   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:44.072331   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:44.072341   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:44.072350   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:44.072364   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:44.072370   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:44.072377   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:44.072386   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:44.072393   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:44.072399   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:44.072416   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:44.072439   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:44.072447   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:44.072455   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:44.072462   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:44.072467   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:44.072473   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:44.072479   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:44.072485   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:44.072493   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:44.072500   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:44.072507   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:44.072515   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:44.072523   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:44.072529   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:44.072536   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:44.072544   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:44.072551   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:44.072559   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:44.072577   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:44.072594   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:44.072606   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:44.072614   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:44.072622   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:44.072629   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:44.072636   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:44.072646   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:44.072662   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:46.072776   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 14
	I0906 12:37:46.072789   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:46.072868   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:46.073618   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:46.073692   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:46.073711   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:46.073720   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:46.073726   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:46.073733   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:46.073739   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:46.073747   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:46.073753   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:46.073759   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:46.073766   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:46.073781   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:46.073787   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:46.073794   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:46.073803   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:46.073811   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:46.073818   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:46.073824   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:46.073832   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:46.073850   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:46.073863   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:46.073878   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:46.073890   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:46.073905   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:46.073926   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:46.073942   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:46.073952   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:46.073961   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:46.073973   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:46.073980   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:46.073988   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:46.073998   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:46.074016   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:46.074024   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:46.074031   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:46.074038   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:46.074044   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:46.074051   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:46.074061   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:48.074595   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 15
	I0906 12:37:48.074609   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:48.074647   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:48.075417   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:48.075492   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:48.075502   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:48.075511   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:48.075518   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:48.075530   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:48.075538   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:48.075545   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:48.075552   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:48.075558   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:48.075564   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:48.075571   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:48.075583   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:48.075593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:48.075600   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:48.075609   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:48.075618   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:48.075627   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:48.075634   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:48.075651   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:48.075665   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:48.075678   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:48.075692   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:48.075710   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:48.075719   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:48.075726   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:48.075734   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:48.075741   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:48.075752   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:48.075771   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:48.075784   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:48.075792   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:48.075801   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:48.075808   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:48.075815   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:48.075822   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:48.075828   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:48.075837   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:48.075846   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:50.076447   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 16
	I0906 12:37:50.076462   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:50.076527   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:50.077281   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:50.077352   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:50.077364   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:50.077373   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:50.077380   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:50.077386   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:50.077392   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:50.077398   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:50.077414   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:50.077423   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:50.077430   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:50.077436   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:50.077443   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:50.077451   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:50.077470   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:50.077479   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:50.077486   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:50.077499   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:50.077508   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:50.077517   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:50.077525   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:50.077532   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:50.077539   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:50.077547   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:50.077561   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:50.077573   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:50.077582   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:50.077591   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:50.077597   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:50.077603   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:50.077617   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:50.077629   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:50.077637   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:50.077645   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:50.077660   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:50.077668   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:50.077676   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:50.077683   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:50.077691   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:52.079142   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 17
	I0906 12:37:52.079157   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:52.079237   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:52.079996   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:52.080051   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:52.080061   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:52.080078   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:52.080088   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:52.080106   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:52.080118   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:52.080127   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:52.080134   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:52.080141   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:52.080147   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:52.080154   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:52.080160   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:52.080176   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:52.080185   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:52.080201   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:52.080212   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:52.080219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:52.080226   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:52.080240   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:52.080251   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:52.080264   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:52.080273   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:52.080281   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:52.080287   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:52.080293   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:52.080301   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:52.080307   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:52.080313   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:52.080323   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:52.080330   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:52.080337   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:52.080344   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:52.080352   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:52.080359   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:52.080366   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:52.080374   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:52.080381   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:52.080396   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:54.082283   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 18
	I0906 12:37:54.082297   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:54.082336   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:54.083119   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:54.083168   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:54.083176   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:54.083195   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:54.083205   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:54.083224   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:54.083231   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:54.083237   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:54.083243   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:54.083256   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:54.083270   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:54.083278   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:54.083286   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:54.083302   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:54.083314   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:54.083324   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:54.083332   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:54.083342   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:54.083351   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:54.083360   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:54.083367   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:54.083378   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:54.083389   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:54.083397   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:54.083404   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:54.083419   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:54.083428   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:54.083435   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:54.083440   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:54.083446   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:54.083453   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:54.083464   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:54.083477   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:54.083485   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:54.083493   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:54.083500   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:54.083508   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:54.083515   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:54.083523   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:56.083385   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 19
	I0906 12:37:56.083398   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:56.083482   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:56.084219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:56.084292   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:56.084303   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:56.084312   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:56.084321   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:56.084329   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:56.084335   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:56.084342   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:56.084348   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:56.084355   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:56.084363   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:56.084373   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:56.084379   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:56.084387   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:56.084394   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:56.084401   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:56.084407   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:56.084414   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:56.084422   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:56.084428   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:56.084440   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:56.084449   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:56.084456   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:56.084464   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:56.084478   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:56.084490   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:56.084499   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:56.084515   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:56.084523   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:56.084536   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:56.084548   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:56.084559   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:56.084568   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:56.084576   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:56.084583   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:56.084593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:56.084602   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:56.084609   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:56.084617   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:37:58.086465   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 20
	I0906 12:37:58.086481   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:37:58.086523   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:37:58.087334   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:37:58.087383   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:37:58.087395   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:37:58.087412   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:37:58.087419   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:37:58.087428   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:37:58.087434   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:37:58.087441   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:37:58.087448   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:37:58.087454   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:37:58.087461   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:37:58.087468   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:37:58.087474   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:37:58.087481   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:37:58.087487   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:37:58.087493   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:37:58.087509   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:37:58.087522   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:37:58.087548   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:37:58.087562   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:37:58.087570   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:37:58.087578   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:37:58.087590   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:37:58.087598   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:37:58.087612   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:37:58.087620   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:37:58.087629   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:37:58.087639   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:37:58.087647   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:37:58.087654   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:37:58.087662   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:37:58.087670   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:37:58.087677   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:37:58.087684   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:37:58.087692   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:37:58.087700   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:37:58.087717   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:37:58.087729   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:37:58.087747   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:00.088013   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 21
	I0906 12:38:00.088029   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:00.088105   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:00.088863   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:00.088940   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:00.088952   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:00.088963   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:00.088970   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:00.088982   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:00.088989   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:00.088996   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:00.089018   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:00.089026   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:00.089037   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:00.089044   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:00.089058   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:00.089074   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:00.089087   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:00.089103   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:00.089112   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:00.089120   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:00.089129   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:00.089136   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:00.089144   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:00.089152   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:00.089158   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:00.089174   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:00.089187   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:00.089195   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:00.089202   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:00.089209   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:00.089218   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:00.089225   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:00.089231   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:00.089237   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:00.089247   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:00.089261   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:00.089275   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:00.089284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:00.089292   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:00.089299   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:00.089314   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:02.091067   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 22
	I0906 12:38:02.091080   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:02.091150   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:02.091934   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:02.091998   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:02.092005   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:02.092013   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:02.092024   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:02.092035   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:02.092042   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:02.092049   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:02.092056   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:02.092064   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:02.092071   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:02.092079   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:02.092086   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:02.092093   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:02.092099   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:02.092116   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:02.092124   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:02.092132   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:02.092140   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:02.092147   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:02.092158   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:02.092168   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:02.092174   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:02.092181   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:02.092189   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:02.092206   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:02.092219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:02.092227   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:02.092235   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:02.092246   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:02.092256   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:02.092265   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:02.092271   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:02.092290   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:02.092304   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:02.092313   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:02.092337   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:02.092346   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:02.092354   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:04.094225   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 23
	I0906 12:38:04.094239   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:04.094294   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:04.095085   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:04.095132   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:04.095141   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:04.095152   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:04.095167   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:04.095175   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:04.095183   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:04.095195   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:04.095202   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:04.095209   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:04.095217   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:04.095229   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:04.095237   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:04.095243   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:04.095249   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:04.095270   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:04.095285   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:04.095293   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:04.095301   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:04.095309   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:04.095317   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:04.095336   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:04.095343   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:04.095351   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:04.095358   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:04.095366   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:04.095373   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:04.095380   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:04.095388   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:04.095395   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:04.095403   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:04.095410   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:04.095420   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:04.095432   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:04.095440   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:04.095447   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:04.095454   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:04.095462   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:04.095469   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:06.097337   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 24
	I0906 12:38:06.097354   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:06.097412   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:06.098172   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:06.098235   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:06.098265   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:06.098278   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:06.098287   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:06.098296   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:06.098305   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:06.098318   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:06.098325   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:06.098346   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:06.098354   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:06.098361   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:06.098370   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:06.098378   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:06.098385   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:06.098392   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:06.098400   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:06.098416   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:06.098428   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:06.098437   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:06.098445   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:06.098452   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:06.098461   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:06.098472   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:06.098482   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:06.098492   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:06.098501   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:06.098508   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:06.098517   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:06.098534   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:06.098546   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:06.098555   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:06.098562   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:06.098579   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:06.098590   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:06.098599   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:06.098605   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:06.098612   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:06.098621   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:08.100480   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 25
	I0906 12:38:08.100494   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:08.100537   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:08.101291   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:08.101362   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:08.101375   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:08.101392   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:08.101403   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:08.101415   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:08.101421   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:08.101429   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:08.101436   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:08.101445   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:08.101452   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:08.101468   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:08.101482   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:08.101494   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:08.101505   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:08.101514   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:08.101523   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:08.101532   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:08.101544   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:08.101558   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:08.101574   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:08.101584   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:08.101592   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:08.101600   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:08.101606   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:08.101615   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:08.101652   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:08.101667   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:08.101677   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:08.101685   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:08.101692   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:08.101702   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:08.101710   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:08.101718   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:08.101725   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:08.101731   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:08.101737   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:08.101745   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:08.101753   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:10.103584   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 26
	I0906 12:38:10.103599   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:10.103643   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:10.104408   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:10.104471   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:10.104482   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:10.104498   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:10.104507   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:10.104514   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:10.104521   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:10.104532   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:10.104541   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:10.104550   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:10.104560   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:10.104571   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:10.104577   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:10.104584   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:10.104592   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:10.104608   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:10.104616   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:10.104623   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:10.104632   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:10.104639   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:10.104647   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:10.104654   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:10.104661   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:10.104669   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:10.104677   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:10.104699   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:10.104712   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:10.104720   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:10.104729   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:10.104737   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:10.104746   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:10.104753   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:10.104761   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:10.104768   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:10.104788   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:10.104802   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:10.104812   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:10.104820   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:10.104833   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:12.105420   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 27
	I0906 12:38:12.105433   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:12.105501   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:12.106266   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:12.106340   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:12.106349   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:12.106366   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:12.106374   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:12.106382   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:12.106393   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:12.106417   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:12.106430   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:12.106444   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:12.106450   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:12.106463   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:12.106472   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:12.106479   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:12.106498   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:12.106520   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:12.106530   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:12.106537   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:12.106546   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:12.106553   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:12.106564   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:12.106572   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:12.106577   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:12.106593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:12.106606   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:12.106615   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:12.106624   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:12.106631   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:12.106641   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:12.106650   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:12.106658   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:12.106665   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:12.106678   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:12.106685   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:12.106693   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:12.106700   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:12.106716   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:12.106724   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:12.106732   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:14.107570   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 28
	I0906 12:38:14.107584   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:14.107651   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:14.108417   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:14.108483   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:14.108491   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:14.108501   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:14.108508   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:14.108524   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:14.108531   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:14.108542   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:14.108551   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:14.108560   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:14.108574   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:14.108584   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:14.108593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:14.108602   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:14.108616   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:14.108627   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:14.108634   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:14.108642   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:14.108657   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:14.108677   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:14.108685   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:14.108693   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:14.108701   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:14.108709   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:14.108716   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:14.108730   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:14.108742   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:14.108750   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:14.108760   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:14.108771   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:14.108780   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:14.108788   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:14.108799   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:14.108807   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:14.108814   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:14.108820   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:14.108832   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:14.108845   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:14.108854   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:16.109196   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 29
	I0906 12:38:16.109208   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:16.109284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:16.110067   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for de:b1:1f:79:fb:4e in /var/db/dhcpd_leases ...
	I0906 12:38:16.110123   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:16.110138   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:16.110152   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:16.110159   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:16.110177   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:16.110184   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:16.110190   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:16.110201   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:16.110211   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:16.110219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:16.110225   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:16.110236   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:16.110241   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:16.110248   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:16.110253   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:16.110259   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:16.110268   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:16.110276   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:16.110283   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:16.110288   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:16.110311   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:16.110323   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:16.110330   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:16.110342   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:16.110349   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:16.110356   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:16.110363   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:16.110368   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:16.110374   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:16.110380   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:16.110385   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:16.110392   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:16.110397   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:16.110402   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:16.110419   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:16.110432   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:16.110442   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:16.110450   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:18.111870   14010 client.go:171] duration metric: took 1m1.361604043s to LocalClient.Create
	I0906 12:38:20.112741   14010 start.go:128] duration metric: took 1m3.394221209s to createHost
	I0906 12:38:20.112757   14010 start.go:83] releasing machines lock for "offline-docker-273000", held for 1m3.394329387s
	W0906 12:38:20.112773   14010 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:b1:1f:79:fb:4e
	I0906 12:38:20.113114   14010 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:38:20.113142   14010 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:38:20.121980   14010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58109
	I0906 12:38:20.122322   14010 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:38:20.122661   14010 main.go:141] libmachine: Using API Version  1
	I0906 12:38:20.122688   14010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:38:20.122930   14010 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:38:20.123312   14010 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:38:20.123336   14010 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:38:20.131734   14010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58111
	I0906 12:38:20.132060   14010 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:38:20.132416   14010 main.go:141] libmachine: Using API Version  1
	I0906 12:38:20.132433   14010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:38:20.132632   14010 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:38:20.132740   14010 main.go:141] libmachine: (offline-docker-273000) Calling .GetState
	I0906 12:38:20.132825   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.132892   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:20.133838   14010 main.go:141] libmachine: (offline-docker-273000) Calling .DriverName
	I0906 12:38:20.197145   14010 out.go:177] * Deleting "offline-docker-273000" in hyperkit ...
	I0906 12:38:20.217935   14010 main.go:141] libmachine: (offline-docker-273000) Calling .Remove
	I0906 12:38:20.218057   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.218070   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.218132   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:20.219080   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.219136   14010 main.go:141] libmachine: (offline-docker-273000) DBG | waiting for graceful shutdown
	I0906 12:38:21.221279   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:21.221341   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:21.222246   14010 main.go:141] libmachine: (offline-docker-273000) DBG | waiting for graceful shutdown
	I0906 12:38:22.224343   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:22.224422   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:22.226040   14010 main.go:141] libmachine: (offline-docker-273000) DBG | waiting for graceful shutdown
	I0906 12:38:23.226320   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:23.226404   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:23.226958   14010 main.go:141] libmachine: (offline-docker-273000) DBG | waiting for graceful shutdown
	I0906 12:38:24.227076   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:24.227159   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:24.227727   14010 main.go:141] libmachine: (offline-docker-273000) DBG | waiting for graceful shutdown
	I0906 12:38:25.228221   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:25.228305   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14055
	I0906 12:38:25.229389   14010 main.go:141] libmachine: (offline-docker-273000) DBG | sending sigkill
	I0906 12:38:25.229399   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0906 12:38:25.243584   14010 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:b1:1f:79:fb:4e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:b1:1f:79:fb:4e
	I0906 12:38:25.243604   14010 start.go:729] Will try again in 5 seconds ...
	I0906 12:38:25.255097   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:38:25 WARN : hyperkit: failed to read stderr: EOF
	I0906 12:38:25.255129   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:38:25 WARN : hyperkit: failed to read stdout: EOF
	I0906 12:38:30.244913   14010 start.go:360] acquireMachinesLock for offline-docker-273000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:39:22.962080   14010 start.go:364] duration metric: took 52.717550377s to acquireMachinesLock for "offline-docker-273000"
	I0906 12:39:22.962122   14010 start.go:93] Provisioning new machine with config: &{Name:offline-docker-273000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-d
ocker-273000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:39:22.962173   14010 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:39:22.983555   14010 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:39:22.983651   14010 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:39:22.983674   14010 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:39:22.992173   14010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58119
	I0906 12:39:22.992528   14010 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:39:22.992860   14010 main.go:141] libmachine: Using API Version  1
	I0906 12:39:22.992869   14010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:39:22.993117   14010 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:39:22.993251   14010 main.go:141] libmachine: (offline-docker-273000) Calling .GetMachineName
	I0906 12:39:22.993349   14010 main.go:141] libmachine: (offline-docker-273000) Calling .DriverName
	I0906 12:39:22.993471   14010 start.go:159] libmachine.API.Create for "offline-docker-273000" (driver="hyperkit")
	I0906 12:39:22.993493   14010 client.go:168] LocalClient.Create starting
	I0906 12:39:22.993522   14010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:39:22.993577   14010 main.go:141] libmachine: Decoding PEM data...
	I0906 12:39:22.993590   14010 main.go:141] libmachine: Parsing certificate...
	I0906 12:39:22.993638   14010 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:39:22.993675   14010 main.go:141] libmachine: Decoding PEM data...
	I0906 12:39:22.993686   14010 main.go:141] libmachine: Parsing certificate...
	I0906 12:39:22.993717   14010 main.go:141] libmachine: Running pre-create checks...
	I0906 12:39:22.993724   14010 main.go:141] libmachine: (offline-docker-273000) Calling .PreCreateCheck
	I0906 12:39:22.993802   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:22.993823   14010 main.go:141] libmachine: (offline-docker-273000) Calling .GetConfigRaw
	I0906 12:39:23.024387   14010 main.go:141] libmachine: Creating machine...
	I0906 12:39:23.024396   14010 main.go:141] libmachine: (offline-docker-273000) Calling .Create
	I0906 12:39:23.024486   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:23.024636   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:39:23.024479   14200 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:39:23.024677   14010 main.go:141] libmachine: (offline-docker-273000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:39:23.236709   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:39:23.236605   14200 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/id_rsa...
	I0906 12:39:23.348558   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:39:23.348485   14200 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/offline-docker-273000.rawdisk...
	I0906 12:39:23.348568   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Writing magic tar header
	I0906 12:39:23.348577   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Writing SSH key tar header
	I0906 12:39:23.349162   14010 main.go:141] libmachine: (offline-docker-273000) DBG | I0906 12:39:23.349114   14200 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000 ...
	I0906 12:39:23.728284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:23.728304   14010 main.go:141] libmachine: (offline-docker-273000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/hyperkit.pid
	I0906 12:39:23.728339   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Using UUID 9fe7586e-873a-406c-9109-ded4ebd976c7
	I0906 12:39:23.752675   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Generated MAC e:40:ed:d:44:e5
	I0906 12:39:23.752691   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-273000
	I0906 12:39:23.752737   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fe7586e-873a-406c-9109-ded4ebd976c7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0906 12:39:23.752774   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9fe7586e-873a-406c-9109-ded4ebd976c7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLi
ne:"", process:(*os.Process)(nil)}
	I0906 12:39:23.752857   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9fe7586e-873a-406c-9109-ded4ebd976c7", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/offline-docker-273000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage,
/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-273000"}
	I0906 12:39:23.752928   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9fe7586e-873a-406c-9109-ded4ebd976c7 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/offline-docker-273000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-273000"
	I0906 12:39:23.752949   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:39:23.755858   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 DEBUG: hyperkit: Pid is 14201
	I0906 12:39:23.756317   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 0
	I0906 12:39:23.756338   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:23.756413   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:23.757358   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:23.757423   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:23.757434   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:23.757476   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:23.757493   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:23.757505   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:23.757518   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:23.757530   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:23.757543   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:23.757555   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:23.757562   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:23.757577   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:23.757588   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:23.757619   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:23.757635   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:23.757646   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:23.757657   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:23.757666   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:23.757671   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:23.757686   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:23.757705   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:23.757718   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:23.757733   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:23.757746   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:23.757761   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:23.757775   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:23.757792   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:23.757812   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:23.757829   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:23.757842   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:23.757859   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:23.757875   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:23.757892   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:23.757913   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:23.757929   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:23.757943   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:23.757959   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:23.757992   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:23.758004   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:23.763858   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:39:23.771894   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/offline-docker-273000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:39:23.772778   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:39:23.772797   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:39:23.772809   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:39:23.772818   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:39:24.153840   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:39:24.153855   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:39:24.268585   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:39:24.268606   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:39:24.268620   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:39:24.268632   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:39:24.269475   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:39:24.269486   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:39:25.757771   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 1
	I0906 12:39:25.757792   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:25.757861   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:25.758666   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:25.758732   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:25.758740   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:25.758756   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:25.758765   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:25.758774   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:25.758782   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:25.758801   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:25.758808   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:25.758820   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:25.758833   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:25.758841   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:25.758851   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:25.758858   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:25.758864   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:25.758871   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:25.758877   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:25.758899   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:25.758915   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:25.758923   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:25.758932   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:25.758963   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:25.758979   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:25.758997   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:25.759008   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:25.759018   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:25.759027   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:25.759034   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:25.759039   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:25.759046   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:25.759054   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:25.759061   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:25.759067   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:25.759080   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:25.759092   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:25.759109   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:25.759130   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:25.759141   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:25.759162   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:27.759080   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 2
	I0906 12:39:27.759094   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:27.759169   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:27.759979   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:27.760049   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:27.760061   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:27.760073   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:27.760080   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:27.760092   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:27.760106   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:27.760113   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:27.760122   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:27.760139   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:27.760157   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:27.760184   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:27.760221   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:27.760244   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:27.760251   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:27.760258   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:27.760264   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:27.760272   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:27.760288   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:27.760299   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:27.760314   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:27.760329   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:27.760338   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:27.760346   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:27.760354   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:27.760362   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:27.760373   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:27.760382   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:27.760393   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:27.760402   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:27.760430   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:27.760443   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:27.760450   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:27.760460   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:27.760471   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:27.760478   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:27.760486   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:27.760493   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:27.760502   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:29.716156   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:39:29.716324   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:39:29.716335   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:39:29.735952   14010 main.go:141] libmachine: (offline-docker-273000) DBG | 2024/09/06 12:39:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:39:29.760368   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 3
	I0906 12:39:29.760397   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:29.760593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:29.761895   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:29.762010   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:29.762025   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:29.762037   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:29.762049   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:29.762058   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:29.762066   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:29.762076   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:29.762089   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:29.762099   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:29.762108   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:29.762120   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:29.762129   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:29.762152   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:29.762162   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:29.762199   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:29.762219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:29.762230   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:29.762240   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:29.762249   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:29.762261   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:29.762271   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:29.762279   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:29.762289   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:29.762308   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:29.762318   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:29.762329   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:29.762351   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:29.762366   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:29.762388   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:29.762405   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:29.762421   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:29.762433   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:29.762451   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:29.762468   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:29.762480   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:29.762491   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:29.762505   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:29.762516   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:31.762502   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 4
	I0906 12:39:31.762521   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:31.762555   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:31.763355   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:31.763424   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:31.763444   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:31.763454   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:31.763462   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:31.763468   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:31.763477   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:31.763490   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:31.763499   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:31.763510   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:31.763517   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:31.763524   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:31.763532   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:31.763548   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:31.763561   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:31.763569   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:31.763578   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:31.763589   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:31.763599   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:31.763608   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:31.763616   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:31.763624   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:31.763632   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:31.763652   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:31.763664   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:31.763673   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:31.763681   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:31.763689   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:31.763697   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:31.763709   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:31.763720   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:31.763728   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:31.763736   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:31.763742   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:31.763750   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:31.763762   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:31.763777   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:31.763785   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:31.763793   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:33.764392   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 5
	I0906 12:39:33.764405   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:33.764415   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:33.765207   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:33.765270   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:33.765281   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:33.765292   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:33.765301   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:33.765309   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:33.765316   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:33.765333   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:33.765344   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:33.765352   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:33.765363   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:33.765373   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:33.765382   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:33.765389   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:33.765397   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:33.765407   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:33.765415   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:33.765422   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:33.765429   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:33.765442   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:33.765451   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:33.765458   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:33.765468   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:33.765476   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:33.765489   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:33.765497   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:33.765505   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:33.765512   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:33.765519   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:33.765528   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:33.765537   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:33.765551   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:33.765570   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:33.765582   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:33.765588   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:33.765604   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:33.765616   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:33.765624   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:33.765633   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:35.766270   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 6
	I0906 12:39:35.766284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:35.766344   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:35.767154   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:35.767179   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:35.767186   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:35.767195   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:35.767213   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:35.767232   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:35.767243   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:35.767250   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:35.767259   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:35.767273   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:35.767282   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:35.767297   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:35.767310   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:35.767319   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:35.767327   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:35.767334   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:35.767343   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:35.767356   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:35.767367   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:35.767375   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:35.767382   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:35.767389   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:35.767396   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:35.767404   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:35.767411   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:35.767425   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:35.767439   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:35.767447   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:35.767455   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:35.767462   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:35.767470   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:35.767477   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:35.767486   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:35.767505   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:35.767511   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:35.767529   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:35.767537   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:35.767544   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:35.767553   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:37.768682   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 7
	I0906 12:39:37.768697   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:37.768762   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:37.769512   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:37.769587   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:37.769615   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:37.769627   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:37.769634   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:37.769641   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:37.769647   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:37.769678   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:37.769686   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:37.769694   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:37.769708   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:37.769728   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:37.769742   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:37.769755   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:37.769765   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:37.769771   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:37.769778   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:37.769786   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:37.769793   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:37.769801   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:37.769808   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:37.769814   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:37.769820   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:37.769830   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:37.769839   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:37.769848   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:37.769856   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:37.769863   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:37.769871   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:37.769878   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:37.769887   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:37.769894   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:37.769902   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:37.769909   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:37.769915   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:37.769923   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:37.769931   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:37.769943   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:37.769953   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:39.770601   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 8
	I0906 12:39:39.770613   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:39.770707   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:39.771460   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:39.771519   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:39.771535   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:39.771545   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:39.771558   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:39.771566   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:39.771574   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:39.771585   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:39.771593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:39.771598   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:39.771606   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:39.771612   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:39.771619   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:39.771627   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:39.771643   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:39.771658   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:39.771669   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:39.771678   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:39.771685   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:39.771693   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:39.771701   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:39.771709   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:39.771727   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:39.771739   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:39.771748   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:39.771754   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:39.771766   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:39.771774   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:39.771781   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:39.771805   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:39.771817   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:39.771836   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:39.771851   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:39.771859   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:39.771865   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:39.771872   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:39.771881   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:39.771888   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:39.771897   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:41.771922   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 9
	I0906 12:39:41.771939   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:41.771994   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:41.772749   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:41.772831   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:41.772844   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:41.772855   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:41.772862   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:41.772870   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:41.772876   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:41.772882   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:41.772889   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:41.772909   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:41.772924   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:41.772939   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:41.772950   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:41.772959   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:41.772968   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:41.772983   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:41.772998   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:41.773011   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:41.773019   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:41.773024   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:41.773030   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:41.773042   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:41.773056   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:41.773064   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:41.773070   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:41.773079   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:41.773088   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:41.773103   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:41.773117   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:41.773126   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:41.773132   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:41.773138   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:41.773145   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:41.773154   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:41.773162   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:41.773169   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:41.773179   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:41.773188   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:41.773196   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:43.775032   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 10
	I0906 12:39:43.775045   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:43.775143   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:43.775904   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:43.775977   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:43.775989   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:43.776002   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:43.776009   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:43.776017   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:43.776025   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:43.776033   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:43.776043   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:43.776051   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:43.776057   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:43.776065   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:43.776071   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:43.776086   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:43.776096   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:43.776102   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:43.776109   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:43.776122   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:43.776134   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:43.776151   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:43.776164   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:43.776173   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:43.776183   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:43.776194   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:43.776204   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:43.776211   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:43.776219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:43.776235   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:43.776247   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:43.776258   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:43.776268   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:43.776276   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:43.776284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:43.776291   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:43.776304   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:43.776311   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:43.776317   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:43.776326   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:43.776342   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:45.777390   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 11
	I0906 12:39:45.777409   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:45.777472   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:45.778273   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:45.778313   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:45.778325   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:45.778337   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:45.778346   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:45.778353   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:45.778360   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:45.778367   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:45.778376   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:45.778383   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:45.778390   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:45.778397   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:45.778405   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:45.778433   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:45.778444   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:45.778454   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:45.778466   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:45.778478   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:45.778487   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:45.778497   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:45.778511   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:45.778522   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:45.778528   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:45.778535   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:45.778543   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:45.778550   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:45.778558   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:45.778571   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:45.778584   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:45.778593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:45.778601   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:45.778613   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:45.778622   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:45.778630   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:45.778636   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:45.778643   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:45.778652   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:45.778664   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:45.778673   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:47.779433   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 12
	I0906 12:39:47.779446   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:47.779499   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:47.780284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:47.780326   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:47.780334   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:47.780342   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:47.780348   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:47.780365   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:47.780377   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:47.780386   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:47.780393   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:47.780411   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:47.780418   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:47.780431   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:47.780440   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:47.780457   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:47.780469   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:47.780478   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:47.780486   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:47.780499   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:47.780507   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:47.780513   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:47.780521   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:47.780529   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:47.780534   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:47.780546   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:47.780554   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:47.780562   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:47.780570   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:47.780582   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:47.780591   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:47.780598   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:47.780606   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:47.780613   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:47.780631   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:47.780643   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:47.780651   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:47.780666   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:47.780688   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:47.780697   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:47.780706   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:49.780904   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 13
	I0906 12:39:49.780916   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:49.780940   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:49.781752   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:49.781807   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:49.781818   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:49.781843   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:49.781850   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:49.781870   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:49.781881   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:49.781888   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:49.781896   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:49.781902   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:49.781910   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:49.781918   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:49.781928   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:49.781935   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:49.781941   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:49.781949   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:49.781956   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:49.781963   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:49.781970   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:49.781977   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:49.781985   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:49.781992   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:49.782000   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:49.782008   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:49.782014   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:49.782025   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:49.782033   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:49.782040   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:49.782048   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:49.782064   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:49.782076   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:49.782085   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:49.782093   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:49.782100   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:49.782108   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:49.782115   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:49.782123   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:49.782131   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:49.782138   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:51.782187   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 14
	I0906 12:39:51.782203   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:51.782259   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:51.783023   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:51.783083   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:51.783094   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:51.783105   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:51.783113   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:51.783128   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:51.783148   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:51.783157   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:51.783164   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:51.783180   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:51.783200   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:51.783207   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:51.783217   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:51.783227   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:51.783245   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:51.783254   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:51.783261   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:51.783267   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:51.783273   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:51.783289   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:51.783301   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:51.783320   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:51.783335   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:51.783346   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:51.783353   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:51.783360   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:51.783368   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:51.783376   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:51.783384   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:51.783391   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:51.783398   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:51.783411   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:51.783422   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:51.783431   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:51.783441   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:51.783453   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:51.783462   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:51.783469   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:51.783477   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:53.785326   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 15
	I0906 12:39:53.785340   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:53.785386   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:53.786162   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:53.786216   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:53.786225   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:53.786236   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:53.786244   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:53.786252   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:53.786258   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:53.786273   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:53.786284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:53.786291   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:53.786297   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:53.786307   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:53.786314   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:53.786321   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:53.786327   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:53.786333   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:53.786343   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:53.786358   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:53.786369   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:53.786378   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:53.786385   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:53.786392   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:53.786405   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:53.786413   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:53.786421   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:53.786428   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:53.786436   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:53.786443   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:53.786451   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:53.786458   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:53.786467   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:53.786474   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:53.786482   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:53.786489   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:53.786497   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:53.786504   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:53.786514   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:53.786526   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:53.786534   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:55.787508   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 16
	I0906 12:39:55.787522   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:55.787601   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:55.788377   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:55.788446   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:55.788456   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:55.788465   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:55.788473   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:55.788482   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:55.788489   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:55.788502   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:55.788512   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:55.788521   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:55.788537   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:55.788563   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:55.788575   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:55.788583   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:55.788591   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:55.788598   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:55.788605   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:55.788624   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:55.788638   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:55.788647   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:55.788656   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:55.788664   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:55.788672   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:55.788679   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:55.788685   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:55.788692   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:55.788700   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:55.788710   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:55.788721   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:55.788737   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:55.788757   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:55.788768   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:55.788777   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:55.788785   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:55.788793   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:55.788800   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:55.788809   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:55.788815   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:55.788823   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:57.790658   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 17
	I0906 12:39:57.790674   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:57.790746   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:57.791519   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:57.791584   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:57.791593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:57.791611   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:57.791623   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:57.791635   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:57.791642   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:57.791649   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:57.791656   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:57.791664   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:57.791672   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:57.791680   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:57.791687   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:57.791705   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:57.791718   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:57.791726   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:57.791732   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:57.791739   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:57.791745   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:57.791752   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:57.791761   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:57.791768   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:57.791776   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:57.791784   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:57.791790   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:57.791808   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:57.791820   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:57.791829   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:57.791837   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:57.791847   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:57.791856   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:57.791866   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:57.791884   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:57.791898   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:57.791912   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:57.791920   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:57.791928   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:57.791935   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:57.791943   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:59.793795   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 18
	I0906 12:39:59.793813   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:59.793849   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:39:59.794638   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:39:59.794686   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:59.794699   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:59.794714   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:59.794722   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:59.794730   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:59.794737   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:59.794746   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:59.794759   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:59.794768   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:59.794775   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:59.794781   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:59.794794   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:59.794806   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:59.794814   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:59.794823   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:59.794834   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:59.794846   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:59.794863   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:59.794876   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:59.794884   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:59.794892   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:59.794906   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:59.794919   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:59.794930   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:59.794936   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:59.794944   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:59.794953   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:59.794970   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:59.794979   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:59.794987   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:59.794993   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:59.795006   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:59.795020   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:59.795029   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:59.795038   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:59.795052   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:59.795066   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:59.795076   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:01.795264   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 19
	I0906 12:40:01.795277   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:01.795393   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:01.796148   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:01.796203   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:01.796230   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:01.796239   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:01.796259   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:01.796270   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:01.796278   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:01.796285   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:01.796296   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:01.796305   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:01.796314   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:01.796328   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:01.796340   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:01.796349   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:01.796356   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:01.796364   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:01.796370   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:01.796379   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:01.796389   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:01.796398   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:01.796404   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:01.796415   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:01.796424   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:01.796432   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:01.796445   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:01.796453   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:01.796460   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:01.796476   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:01.796487   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:01.796500   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:01.796508   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:01.796515   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:01.796524   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:01.796531   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:01.796540   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:01.796558   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:01.796570   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:01.796578   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:01.796585   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:03.797557   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 20
	I0906 12:40:03.797573   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:03.797629   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:03.798419   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:03.798464   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:03.798473   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:03.798495   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:03.798505   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:03.798513   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:03.798520   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:03.798530   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:03.798537   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:03.798544   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:03.798558   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:03.798570   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:03.798576   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:03.798583   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:03.798591   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:03.798599   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:03.798605   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:03.798615   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:03.798622   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:03.798630   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:03.798645   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:03.798661   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:03.798669   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:03.798675   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:03.798682   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:03.798690   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:03.798706   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:03.798717   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:03.798733   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:03.798742   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:03.798769   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:03.798785   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:03.798794   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:03.798802   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:03.798811   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:03.798818   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:03.798826   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:03.798833   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:03.798841   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:05.799792   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 21
	I0906 12:40:05.799807   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:05.799843   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:05.800612   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:05.800686   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:05.800697   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:05.800721   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:05.800740   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:05.800751   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:05.800758   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:05.800764   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:05.800771   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:05.800778   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:05.800785   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:05.800792   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:05.800798   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:05.800806   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:05.800817   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:05.800825   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:05.800830   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:05.800841   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:05.800853   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:05.800867   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:05.800875   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:05.800883   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:05.800888   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:05.800907   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:05.800922   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:05.800930   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:05.800937   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:05.800948   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:05.800961   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:05.800977   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:05.800989   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:05.801001   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:05.801010   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:05.801019   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:05.801031   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:05.801043   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:05.801057   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:05.801065   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:05.801073   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:07.802942   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 22
	I0906 12:40:07.802960   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:07.802998   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:07.803790   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:07.803824   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:07.803835   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:07.803860   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:07.803870   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:07.803878   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:07.803886   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:07.803906   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:07.803919   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:07.803936   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:07.803944   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:07.803956   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:07.803965   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:07.803976   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:07.803982   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:07.803989   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:07.803997   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:07.804004   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:07.804012   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:07.804019   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:07.804027   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:07.804033   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:07.804039   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:07.804052   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:07.804064   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:07.804073   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:07.804082   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:07.804090   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:07.804098   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:07.804114   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:07.804127   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:07.804137   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:07.804145   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:07.804153   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:07.804162   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:07.804168   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:07.804175   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:07.804190   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:07.804202   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:09.805061   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 23
	I0906 12:40:09.805074   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:09.805117   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:09.805988   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:09.806056   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:09.806067   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:09.806086   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:09.806107   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:09.806117   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:09.806125   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:09.806141   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:09.806149   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:09.806156   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:09.806167   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:09.806177   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:09.806186   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:09.806193   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:09.806199   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:09.806208   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:09.806215   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:09.806222   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:09.806230   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:09.806245   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:09.806260   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:09.806272   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:09.806280   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:09.806289   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:09.806304   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:09.806312   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:09.806319   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:09.806331   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:09.806339   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:09.806347   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:09.806361   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:09.806375   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:09.806383   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:09.806392   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:09.806399   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:09.806407   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:09.806415   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:09.806429   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:09.806448   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:11.808235   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 24
	I0906 12:40:11.808251   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:11.808338   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:11.809087   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:11.809161   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:11.809174   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:11.809191   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:11.809198   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:11.809207   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:11.809219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:11.809231   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:11.809243   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:11.809258   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:11.809266   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:11.809273   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:11.809281   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:11.809298   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:11.809307   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:11.809315   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:11.809321   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:11.809333   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:11.809346   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:11.809356   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:11.809365   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:11.809372   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:11.809378   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:11.809385   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:11.809393   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:11.809400   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:11.809407   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:11.809414   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:11.809423   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:11.809430   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:11.809439   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:11.809450   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:11.809458   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:11.809466   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:11.809475   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:11.809483   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:11.809489   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:11.809509   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:11.809521   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:13.809741   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 25
	I0906 12:40:13.809756   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:13.809835   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:13.810598   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:13.810661   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:13.810672   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:13.810680   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:13.810686   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:13.810712   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:13.810721   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:13.810727   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:13.810733   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:13.810740   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:13.810747   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:13.810754   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:13.810761   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:13.810770   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:13.810780   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:13.810788   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:13.810796   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:13.810803   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:13.810811   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:13.810818   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:13.810825   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:13.810832   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:13.810840   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:13.810845   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:13.810864   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:13.810875   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:13.810888   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:13.810898   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:13.810907   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:13.810915   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:13.810931   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:13.810952   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:13.810960   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:13.810967   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:13.810976   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:13.810984   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:13.810993   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:13.811000   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:13.811006   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:15.812871   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 26
	I0906 12:40:15.812887   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:15.812915   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:15.813704   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:15.813770   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:15.813782   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:15.813791   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:15.813801   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:15.813810   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:15.813820   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:15.813828   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:15.813838   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:15.813845   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:15.813855   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:15.813862   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:15.813876   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:15.813888   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:15.813895   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:15.813901   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:15.813913   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:15.813927   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:15.813937   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:15.813945   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:15.813952   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:15.813958   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:15.813964   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:15.813973   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:15.813979   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:15.813996   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:15.814005   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:15.814012   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:15.814020   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:15.814036   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:15.814050   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:15.814059   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:15.814071   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:15.814078   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:15.814086   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:15.814093   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:15.814099   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:15.814106   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:15.814114   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:17.814062   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 27
	I0906 12:40:17.814075   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:17.814153   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:17.814919   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:17.815005   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:17.815017   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:17.815027   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:17.815034   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:17.815043   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:17.815050   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:17.815057   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:17.815069   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:17.815077   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:17.815082   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:17.815097   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:17.815110   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:17.815118   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:17.815126   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:17.815134   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:17.815142   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:17.815163   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:17.815179   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:17.815187   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:17.815195   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:17.815211   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:17.815227   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:17.815244   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:17.815254   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:17.815264   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:17.815271   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:17.815280   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:17.815287   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:17.815294   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:17.815302   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:17.815319   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:17.815340   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:17.815357   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:17.815371   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:17.815380   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:17.815396   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:17.815404   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:17.815410   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:19.815269   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 28
	I0906 12:40:19.815286   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:19.815354   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:19.816134   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:19.816194   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:19.816205   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:19.816223   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:19.816237   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:19.816244   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:19.816259   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:19.816275   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:19.816284   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:19.816290   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:19.816297   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:19.816304   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:19.816311   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:19.816317   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:19.816325   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:19.816342   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:19.816353   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:19.816362   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:19.816369   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:19.816375   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:19.816383   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:19.816389   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:19.816403   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:19.816410   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:19.816422   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:19.816430   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:19.816444   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:19.816457   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:19.816464   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:19.816473   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:19.816481   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:19.816491   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:19.816497   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:19.816503   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:19.816508   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:19.816514   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:19.816521   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:19.816528   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:19.816534   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:21.818361   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Attempt 29
	I0906 12:40:21.818377   14010 main.go:141] libmachine: (offline-docker-273000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:21.818440   14010 main.go:141] libmachine: (offline-docker-273000) DBG | hyperkit pid from json: 14201
	I0906 12:40:21.819219   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Searching for e:40:ed:d:44:e5 in /var/db/dhcpd_leases ...
	I0906 12:40:21.819302   14010 main.go:141] libmachine: (offline-docker-273000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:21.819313   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:21.819330   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:21.819339   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:21.819366   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:21.819377   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:21.819404   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:21.819416   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:21.819425   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:21.819442   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:21.819454   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:21.819472   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:21.819482   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:21.819505   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:21.819517   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:21.819526   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:21.819534   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:21.819541   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:21.819549   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:21.819556   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:21.819565   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:21.819580   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:21.819593   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:21.819602   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:21.819607   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:21.819615   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:21.819623   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:21.819630   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:21.819637   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:21.819649   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:21.819660   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:21.819677   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:21.819696   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:21.819713   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:21.819724   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:21.819740   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:21.819753   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:21.819763   14010 main.go:141] libmachine: (offline-docker-273000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:23.819555   14010 client.go:171] duration metric: took 1m0.826530168s to LocalClient.Create
	I0906 12:40:25.821666   14010 start.go:128] duration metric: took 1m2.859975502s to createHost
	I0906 12:40:25.821681   14010 start.go:83] releasing machines lock for "offline-docker-273000", held for 1m2.860067806s
	W0906 12:40:25.821773   14010 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-273000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:40:ed:d:44:e5
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-273000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:40:ed:d:44:e5
	I0906 12:40:25.884804   14010 out.go:201] 
	W0906 12:40:25.907012   14010 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:40:ed:d:44:e5
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:40:ed:d:44:e5
	W0906 12:40:25.907026   14010 out.go:270] * 
	* 
	W0906 12:40:25.907666   14010 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:40:25.969912   14010 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-273000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-09-06 12:40:26.179093 -0700 PDT m=+4303.340855470
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-273000 -n offline-docker-273000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-273000 -n offline-docker-273000: exit status 7 (93.126332ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 12:40:26.269967   14213 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 12:40:26.269992   14213 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-273000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-273000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-273000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-273000: (5.255749248s)
--- FAIL: TestOffline (195.36s)
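The failure above is the hyperkit driver timing out: every "Attempt N" re-scans /var/db/dhcpd_leases for the new VM's MAC address (e:40:ed:d:44:e5), finds 37 stale entries but never a match, and after roughly a minute gives up with "IP address never found in dhcp leases file". A minimal sketch of that lookup, written against the entry format as printed in this log rather than minikube's actual lease-file parser (the helper name and regex are illustrative, not the driver's code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findIPForMAC scans lease entries (in the "dhcp entry: {...}" format
// the hyperkit driver logs above) for the one whose HWAddress matches
// mac, returning its IPAddress. Illustrative sketch only.
func findIPForMAC(leases, mac string) (string, bool) {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)
	for _, line := range strings.Split(leases, "\n") {
		if m := re.FindStringSubmatch(line); m != nil && m[2] == mac {
			return m[1], true
		}
	}
	return "", false // this branch is what drives the retry loop above
}

func main() {
	leases := `{Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
{Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}`

	if ip, ok := findIPForMAC(leases, "e:40:ed:d:44:e5"); ok {
		fmt.Println("found:", ip)
	} else {
		fmt.Println("not found") // the failure mode seen in this test
	}
}
```

Note that macOS's bootpd writes MACs without zero-padded octets (e.g. `e:40:ed:d:44:e5`), so any such lookup must compare the un-padded form, as the log lines here do.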

                                                
                                    
TestAddons/serial/Volcano (198.53s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 15.058623ms
addons_test.go:905: volcano-admission stabilized in 15.196356ms
addons_test.go:913: volcano-controller stabilized in 15.535658ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-k5qvm" [541b565d-786a-4564-bcbf-15c15193057d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003500964s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-tmx68" [acf1202f-d330-4e8d-9413-17abe08a1b8c] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002816683s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-tntd9" [669aa563-3680-4076-9a86-1c3945b313fc] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004540381s
addons_test.go:932: (dbg) Run:  kubectl --context addons-565000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-565000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-565000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [cbdf8305-0c1d-4997-b246-c935716dc886] Pending
helpers_test.go:344: "test-job-nginx-0" [cbdf8305-0c1d-4997-b246-c935716dc886] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-565000 -n addons-565000
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-06 11:36:14.877141 -0700 PDT m=+452.140315275
addons_test.go:964: (dbg) Run:  kubectl --context addons-565000 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-565000 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-d7d3af94-30e2-4dbe-ae76-ed4f8698fb40
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-57j2r (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-57j2r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-565000 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-565000 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
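The `FailedScheduling` event above is the standard resource-fit failure: the test pod requests a full CPU (`cpu: 1`), but the single-node cluster's remaining allocatable CPU is already consumed by system pods and the many addons enabled at start, so no node can fit it. A minimal sketch of the fit check a scheduler performs is below; the millicore numbers are illustrative assumptions, not values taken from this run:

```python
def fits(allocatable_mcpu: int, already_requested_mcpu: int, pending_mcpu: int) -> bool:
    """Return True if a pod requesting pending_mcpu millicores fits on the node.

    A node "fits" a pod only if allocatable minus the sum of existing
    requests leaves at least the pending pod's request available.
    """
    return allocatable_mcpu - already_requested_mcpu >= pending_mcpu

# Illustrative: a 2-CPU node with ~1850m allocatable, addons already
# requesting ~1400m, and test-job asking for 1000m (cpu: 1).
print(fits(1850, 1400, 1000))  # False -> "0/1 nodes are unavailable: 1 Insufficient cpu."
```

In practice the fix on a rig like this is either to enable fewer addons or give the VM more CPUs so the 1-CPU request fits.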
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-565000 -n addons-565000
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-565000 logs -n 25: (2.357273621s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |                     |
	|         | -p download-only-747000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-747000              | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| start   | -o=json --download-only              | download-only-709000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | -p download-only-709000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-709000              | download-only-709000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-747000              | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-709000              | download-only-709000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| start   | --download-only -p                   | binary-mirror-050000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | binary-mirror-050000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:53785               |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-050000              | binary-mirror-050000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| addons  | disable dashboard -p                 | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | addons-565000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | addons-565000                        |                      |         |         |                     |                     |
	| start   | -p addons-565000 --wait=true         | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:32 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=hyperkit  --addons=ingress  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:29:15
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 11:29:15.883388    8455 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:29:15.883642    8455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:15.883648    8455 out.go:358] Setting ErrFile to fd 2...
	I0906 11:29:15.883652    8455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:15.883826    8455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 11:29:15.885262    8455 out.go:352] Setting JSON to false
	I0906 11:29:15.907483    8455 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8926,"bootTime":1725638429,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 11:29:15.907577    8455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:29:15.929590    8455 out.go:177] * [addons-565000] minikube v1.34.0 on Darwin 14.6.1
	I0906 11:29:15.971093    8455 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:29:15.971196    8455 notify.go:220] Checking for updates...
	I0906 11:29:16.012914    8455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:29:16.034449    8455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 11:29:16.055383    8455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:29:16.078183    8455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 11:29:16.099281    8455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:29:16.120935    8455 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:29:16.151256    8455 out.go:177] * Using the hyperkit driver based on user configuration
	I0906 11:29:16.193440    8455 start.go:297] selected driver: hyperkit
	I0906 11:29:16.193465    8455 start.go:901] validating driver "hyperkit" against <nil>
	I0906 11:29:16.193483    8455 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:29:16.197925    8455 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:29:16.198041    8455 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 11:29:16.206646    8455 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 11:29:16.210585    8455 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:16.210604    8455 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 11:29:16.210634    8455 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:29:16.210830    8455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:29:16.210890    8455 cni.go:84] Creating CNI manager for ""
	I0906 11:29:16.210904    8455 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:16.210912    8455 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 11:29:16.210985    8455 start.go:340] cluster config:
	{Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:29:16.211074    8455 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:29:16.232228    8455 out.go:177] * Starting "addons-565000" primary control-plane node in "addons-565000" cluster
	I0906 11:29:16.253269    8455 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:16.253321    8455 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 11:29:16.253338    8455 cache.go:56] Caching tarball of preloaded images
	I0906 11:29:16.253517    8455 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 11:29:16.253531    8455 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 11:29:16.253854    8455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/config.json ...
	I0906 11:29:16.253877    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/config.json: {Name:mkafc2af2209682ce31cfc92a3c90d867b3f2254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:16.254271    8455 start.go:360] acquireMachinesLock for addons-565000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 11:29:16.254423    8455 start.go:364] duration metric: took 135.711µs to acquireMachinesLock for "addons-565000"
	I0906 11:29:16.254448    8455 start.go:93] Provisioning new machine with config: &{Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 11:29:16.254514    8455 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 11:29:16.296199    8455 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 11:29:16.296445    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:16.296512    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:16.306646    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53792
	I0906 11:29:16.306979    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:16.307387    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:16.307396    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:16.307634    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:16.307755    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:16.307849    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:16.307965    8455 start.go:159] libmachine.API.Create for "addons-565000" (driver="hyperkit")
	I0906 11:29:16.307991    8455 client.go:168] LocalClient.Create starting
	I0906 11:29:16.308030    8455 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 11:29:16.380732    8455 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 11:29:16.444278    8455 main.go:141] libmachine: Running pre-create checks...
	I0906 11:29:16.444287    8455 main.go:141] libmachine: (addons-565000) Calling .PreCreateCheck
	I0906 11:29:16.444485    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:16.444578    8455 main.go:141] libmachine: (addons-565000) Calling .GetConfigRaw
	I0906 11:29:16.444992    8455 main.go:141] libmachine: Creating machine...
	I0906 11:29:16.445003    8455 main.go:141] libmachine: (addons-565000) Calling .Create
	I0906 11:29:16.445091    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:16.445216    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.445088    8463 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 11:29:16.445281    8455 main.go:141] libmachine: (addons-565000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 11:29:16.637769    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.637670    8463 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa...
	I0906 11:29:16.794664    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.794585    8463 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/addons-565000.rawdisk...
	I0906 11:29:16.794686    8455 main.go:141] libmachine: (addons-565000) DBG | Writing magic tar header
	I0906 11:29:16.794696    8455 main.go:141] libmachine: (addons-565000) DBG | Writing SSH key tar header
	I0906 11:29:16.795443    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.795365    8463 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000 ...
	I0906 11:29:17.182872    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:17.182898    8455 main.go:141] libmachine: (addons-565000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/hyperkit.pid
	I0906 11:29:17.182987    8455 main.go:141] libmachine: (addons-565000) DBG | Using UUID a75d8b75-3333-4f7d-aa62-647125e97870
	I0906 11:29:17.412086    8455 main.go:141] libmachine: (addons-565000) DBG | Generated MAC ae:ba:57:3d:b2:90
	I0906 11:29:17.412126    8455 main.go:141] libmachine: (addons-565000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-565000
	I0906 11:29:17.412238    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75d8b75-3333-4f7d-aa62-647125e97870", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 11:29:17.412301    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75d8b75-3333-4f7d-aa62-647125e97870", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 11:29:17.412374    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a75d8b75-3333-4f7d-aa62-647125e97870", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/addons-565000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-565000"}
	I0906 11:29:17.412419    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a75d8b75-3333-4f7d-aa62-647125e97870 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/addons-565000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-565000"
	I0906 11:29:17.412441    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 11:29:17.415233    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Pid is 8468
	I0906 11:29:17.415677    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 0
	I0906 11:29:17.415691    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:17.415745    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:17.416589    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:17.416679    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:17.416694    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:17.416703    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:17.416714    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:17.416740    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:17.416752    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:17.416775    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:17.416787    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:17.416795    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:17.416803    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:17.416824    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:17.416833    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:17.416840    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:17.416848    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:17.416856    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:17.416863    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:17.416868    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:17.416882    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:17.416893    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:17.416917    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:17.422855    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 11:29:17.473882    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 11:29:17.474516    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 11:29:17.474538    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 11:29:17.474546    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 11:29:17.474554    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 11:29:18.007896    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 11:29:18.007909    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 11:29:18.123927    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 11:29:18.123963    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 11:29:18.123978    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 11:29:18.123991    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 11:29:18.124837    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 11:29:18.124847    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 11:29:19.417139    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 1
	I0906 11:29:19.417154    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:19.417210    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:19.417995    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:19.418020    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:19.418032    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:19.418044    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:19.418051    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:19.418057    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:19.418072    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:19.418097    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:19.418111    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:19.418120    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:19.418129    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:19.418148    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:19.418160    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:19.418168    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:19.418177    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:19.418183    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:19.418189    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:19.418194    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:19.418207    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:19.418215    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:19.418223    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:21.418795    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 2
	I0906 11:29:21.418809    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:21.418939    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:21.419762    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:21.419839    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:21.419849    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:21.419858    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:21.419868    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:21.419876    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:21.419882    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:21.419889    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:21.419895    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:21.419901    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:21.419909    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:21.419923    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:21.419929    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:21.419943    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:21.419966    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:21.419973    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:21.419981    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:21.419989    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:21.419996    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:21.420003    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:21.420012    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:23.420726    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 3
	I0906 11:29:23.420742    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:23.420820    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:23.421608    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:23.421632    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:23.421640    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:23.421648    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:23.421656    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:23.421669    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:23.421683    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:23.421692    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:23.421700    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:23.421712    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:23.421718    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:23.421724    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:23.421733    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:23.421745    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:23.421756    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:23.421769    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:23.421792    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:23.421799    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:23.421808    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:23.421815    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:23.421822    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:23.722898    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 11:29:23.722927    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 11:29:23.722937    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 11:29:23.741397    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 11:29:25.421888    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 4
	I0906 11:29:25.421902    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:25.421987    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:25.422765    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:25.422824    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:25.422837    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:25.422856    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:25.422863    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:25.422869    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:25.422877    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:25.422885    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:25.422891    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:25.422897    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:25.422903    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:25.422915    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:25.422923    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:25.422929    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:25.422937    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:25.422944    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:25.422951    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:25.422957    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:25.422966    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:25.422972    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:25.422982    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:27.423299    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 5
	I0906 11:29:27.423330    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:27.423401    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:27.424840    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:27.424957    8455 main.go:141] libmachine: (addons-565000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0906 11:29:27.424973    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 11:29:27.425004    8455 main.go:141] libmachine: (addons-565000) DBG | Found match: ae:ba:57:3d:b2:90
	I0906 11:29:27.425015    8455 main.go:141] libmachine: (addons-565000) DBG | IP: 192.169.0.21
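The attempts above show the driver repeatedly scanning /var/db/dhcpd_leases for the VM's MAC address (ae:ba:57:3d:b2:90); on attempt 5 a 20th entry appears and resolves to 192.169.0.21. A minimal Python sketch of that lookup, assuming entries formatted like the `dhcp entry:` lines printed in this log (the on-disk lease file uses a different format, so this parser is illustrative only):

```python
import re

# Matches entries as printed in the log, e.g.
# {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:... Lease:...}
ENTRY_RE = re.compile(r"IPAddress:(\S+) HWAddress:(\S+)")

def find_ip_for_mac(entries, mac):
    """Return the IP leased to `mac`, or None if no entry matches yet
    (the driver then sleeps and retries the scan)."""
    for entry in entries:
        m = ENTRY_RE.search(entry)
        if m and m.group(2).lower() == mac.lower():
            return m.group(1)
    return None
```

The retry loop in the log simply calls this scan every ~2 seconds until a match appears or the attempt budget is exhausted.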
	I0906 11:29:27.425068    8455 main.go:141] libmachine: (addons-565000) Calling .GetConfigRaw
	I0906 11:29:27.425921    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:27.426067    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:27.426198    8455 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 11:29:27.426215    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:27.426358    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:27.426421    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:27.427450    8455 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 11:29:27.427498    8455 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 11:29:27.427504    8455 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 11:29:27.427509    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:27.427647    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:27.427767    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:27.427907    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:27.428026    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:27.428618    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:27.428776    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:27.428783    8455 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 11:29:28.485491    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 11:29:28.485504    8455 main.go:141] libmachine: Detecting the provisioner...
	I0906 11:29:28.485510    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.485638    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.485732    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.485825    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.485921    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.486063    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:28.486203    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:28.486211    8455 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 11:29:28.541039    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 11:29:28.541082    8455 main.go:141] libmachine: found compatible host: buildroot
	I0906 11:29:28.541087    8455 main.go:141] libmachine: Provisioning with buildroot...
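Provisioner detection above runs `cat /etc/os-release` over SSH and keys off the `ID` field (`buildroot` here) to pick a compatible provisioner. A hedged sketch of that detection, assuming a plain key=value parse of the output shown in the log:

```python
def detect_provisioner(os_release: str) -> str:
    """Extract the ID field from /etc/os-release-style text;
    quotes are stripped per the os-release conventions."""
    for line in os_release.splitlines():
        if line.startswith("ID="):
            return line.partition("=")[2].strip().strip('"')
    return "unknown"
```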
	I0906 11:29:28.541093    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:28.541225    8455 buildroot.go:166] provisioning hostname "addons-565000"
	I0906 11:29:28.541236    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:28.541348    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.541440    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.541519    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.541619    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.541714    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.541833    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:28.541981    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:28.541990    8455 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-565000 && echo "addons-565000" | sudo tee /etc/hostname
	I0906 11:29:28.605687    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-565000
	
	I0906 11:29:28.605704    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.605851    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.605953    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.606045    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.606130    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.606273    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:28.606426    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:28.606438    8455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-565000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-565000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-565000' | sudo tee -a /etc/hosts; 
				fi
			fi
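The shell snippet above keeps /etc/hosts consistent with the new hostname: if no line already ends with the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. The same logic as a minimal Python sketch (illustrative only, not the driver's actual code):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the provisioning script: leave /etc/hosts alone if the
    hostname is already present on some line; otherwise rewrite the
    127.0.1.1 line, or append one if none exists."""
    if re.search(r"\s" + re.escape(name) + r"$", hosts, re.MULTILINE):
        return hosts
    if re.search(r"^127\.0\.1\.1\s", hosts, re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + name,
                      hosts, flags=re.MULTILINE)
    return hosts.rstrip("\n") + "\n127.0.1.1 " + name + "\n"
```

Like the script, this is idempotent: a second run finds the hostname already mapped and returns the file unchanged, which is why the SSH command above exits cleanly with empty output.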
	I0906 11:29:28.668881    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 11:29:28.668904    8455 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 11:29:28.668920    8455 buildroot.go:174] setting up certificates
	I0906 11:29:28.668931    8455 provision.go:84] configureAuth start
	I0906 11:29:28.668938    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:28.669069    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:28.669154    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.669232    8455 provision.go:143] copyHostCerts
	I0906 11:29:28.669320    8455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 11:29:28.669571    8455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 11:29:28.669745    8455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 11:29:28.669880    8455 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.addons-565000 san=[127.0.0.1 192.169.0.21 addons-565000 localhost minikube]
	I0906 11:29:28.989037    8455 provision.go:177] copyRemoteCerts
	I0906 11:29:28.989099    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 11:29:28.989116    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.989268    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.989372    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.989487    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.989592    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:29.023416    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 11:29:29.043644    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 11:29:29.064209    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 11:29:29.084761    8455 provision.go:87] duration metric: took 415.817386ms to configureAuth
	I0906 11:29:29.084776    8455 buildroot.go:189] setting minikube options for container-runtime
	I0906 11:29:29.084909    8455 config.go:182] Loaded profile config "addons-565000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:29.084925    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:29.085066    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:29.085161    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:29.085288    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.085426    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.085537    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:29.085695    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:29.085860    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:29.085868    8455 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 11:29:29.142353    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 11:29:29.142368    8455 buildroot.go:70] root file system type: tmpfs
	I0906 11:29:29.142441    8455 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 11:29:29.142454    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:29.142582    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:29.142678    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.142766    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.142860    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:29.142978    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:29.143115    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:29.143161    8455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 11:29:29.208562    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 11:29:29.208589    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:29.208722    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:29.208806    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.208884    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.208971    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:29.209114    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:29.209258    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:29.209271    8455 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 11:29:30.749304    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 11:29:30.749319    8455 main.go:141] libmachine: Checking connection to Docker...
	I0906 11:29:30.749326    8455 main.go:141] libmachine: (addons-565000) Calling .GetURL
	I0906 11:29:30.749466    8455 main.go:141] libmachine: Docker is up and running!
	I0906 11:29:30.749473    8455 main.go:141] libmachine: Reticulating splines...
	I0906 11:29:30.749478    8455 client.go:171] duration metric: took 14.441523854s to LocalClient.Create
	I0906 11:29:30.749497    8455 start.go:167] duration metric: took 14.441575146s to libmachine.API.Create "addons-565000"
	I0906 11:29:30.749505    8455 start.go:293] postStartSetup for "addons-565000" (driver="hyperkit")
	I0906 11:29:30.749512    8455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 11:29:30.749523    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.749678    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 11:29:30.749696    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.749791    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.749878    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.749968    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.750059    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:30.794342    8455 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 11:29:30.797497    8455 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 11:29:30.797511    8455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 11:29:30.797619    8455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 11:29:30.797671    8455 start.go:296] duration metric: took 48.161941ms for postStartSetup
	I0906 11:29:30.797695    8455 main.go:141] libmachine: (addons-565000) Calling .GetConfigRaw
	I0906 11:29:30.798272    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:30.798426    8455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/config.json ...
	I0906 11:29:30.798750    8455 start.go:128] duration metric: took 14.544266633s to createHost
	I0906 11:29:30.798763    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.798855    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.798951    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.799052    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.799145    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.799260    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:30.799382    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:30.799389    8455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 11:29:30.859504    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647369.826213571
	
	I0906 11:29:30.859517    8455 fix.go:216] guest clock: 1725647369.826213571
	I0906 11:29:30.859522    8455 fix.go:229] Guest: 2024-09-06 11:29:29.826213571 -0700 PDT Remote: 2024-09-06 11:29:30.798758 -0700 PDT m=+14.950664910 (delta=-972.544429ms)
	I0906 11:29:30.859543    8455 fix.go:200] guest clock delta is within tolerance: -972.544429ms
	I0906 11:29:30.859547    8455 start.go:83] releasing machines lock for "addons-565000", held for 14.605157821s
	I0906 11:29:30.859567    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.859695    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:30.859777    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.860039    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.860136    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.860225    8455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 11:29:30.860254    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.860316    8455 ssh_runner.go:195] Run: cat /version.json
	I0906 11:29:30.860329    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.860342    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.860440    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.860445    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.860531    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.860555    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.860631    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:30.860647    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.860731    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:30.949966    8455 ssh_runner.go:195] Run: systemctl --version
	I0906 11:29:30.955203    8455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 11:29:30.959375    8455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 11:29:30.959418    8455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 11:29:30.971757    8455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 11:29:30.971780    8455 start.go:495] detecting cgroup driver to use...
	I0906 11:29:30.971894    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:29:30.986977    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 11:29:30.996071    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 11:29:31.005096    8455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 11:29:31.005145    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 11:29:31.014136    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:29:31.023055    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 11:29:31.032062    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:29:31.041022    8455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 11:29:31.050308    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 11:29:31.059248    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 11:29:31.068176    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 11:29:31.077411    8455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 11:29:31.085555    8455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 11:29:31.093479    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:31.201145    8455 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 11:29:31.222253    8455 start.go:495] detecting cgroup driver to use...
	I0906 11:29:31.222341    8455 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 11:29:31.237591    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:29:31.253485    8455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 11:29:31.268036    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:29:31.278338    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 11:29:31.288358    8455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 11:29:31.309700    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 11:29:31.320206    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:29:31.335018    8455 ssh_runner.go:195] Run: which cri-dockerd
	I0906 11:29:31.337800    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 11:29:31.345095    8455 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 11:29:31.358404    8455 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 11:29:31.456884    8455 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 11:29:31.564139    8455 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 11:29:31.564212    8455 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 11:29:31.578809    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:31.694306    8455 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 11:29:33.995914    8455 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.301595395s)
	I0906 11:29:33.995975    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 11:29:34.007272    8455 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 11:29:34.021082    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:29:34.032208    8455 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 11:29:34.139494    8455 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 11:29:34.248209    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:34.362394    8455 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 11:29:34.377274    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:29:34.388527    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:34.483945    8455 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 11:29:34.551558    8455 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 11:29:34.551679    8455 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 11:29:34.556036    8455 start.go:563] Will wait 60s for crictl version
	I0906 11:29:34.556089    8455 ssh_runner.go:195] Run: which crictl
	I0906 11:29:34.559101    8455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 11:29:34.585938    8455 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 11:29:34.586008    8455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:29:34.603332    8455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:29:34.640136    8455 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 11:29:34.640230    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:34.640627    8455 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 11:29:34.645004    8455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 11:29:34.655288    8455 kubeadm.go:883] updating cluster {Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 11:29:34.655356    8455 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:34.655414    8455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:29:34.667257    8455 docker.go:685] Got preloaded images: 
	I0906 11:29:34.667268    8455 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0906 11:29:34.667320    8455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 11:29:34.675463    8455 ssh_runner.go:195] Run: which lz4
	I0906 11:29:34.678431    8455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 11:29:34.681450    8455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 11:29:34.681468    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0906 11:29:35.623019    8455 docker.go:649] duration metric: took 944.631008ms to copy over tarball
	I0906 11:29:35.623081    8455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 11:29:38.069500    8455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.446408457s)
	I0906 11:29:38.069514    8455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 11:29:38.095973    8455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 11:29:38.105513    8455 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0906 11:29:38.119308    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:38.219825    8455 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 11:29:40.635437    8455 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.415599665s)
	I0906 11:29:40.635531    8455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:29:40.656401    8455 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 11:29:40.656422    8455 cache_images.go:84] Images are preloaded, skipping loading
	I0906 11:29:40.656434    8455 kubeadm.go:934] updating node { 192.169.0.21 8443 v1.31.0 docker true true} ...
	I0906 11:29:40.656522    8455 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-565000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 11:29:40.656588    8455 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 11:29:40.695005    8455 cni.go:84] Creating CNI manager for ""
	I0906 11:29:40.695025    8455 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:40.695036    8455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 11:29:40.695050    8455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.21 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-565000 NodeName:addons-565000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 11:29:40.695156    8455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-565000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 11:29:40.695220    8455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 11:29:40.703707    8455 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 11:29:40.703756    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 11:29:40.712181    8455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 11:29:40.725977    8455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 11:29:40.739417    8455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0906 11:29:40.752976    8455 ssh_runner.go:195] Run: grep 192.169.0.21	control-plane.minikube.internal$ /etc/hosts
	I0906 11:29:40.756627    8455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 11:29:40.767138    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:40.874555    8455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:29:40.890663    8455 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000 for IP: 192.169.0.21
	I0906 11:29:40.890677    8455 certs.go:194] generating shared ca certs ...
	I0906 11:29:40.890688    8455 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:40.914930    8455 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 11:29:41.039754    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt ...
	I0906 11:29:41.039769    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt: {Name:mk0d4b22561f1ead1381b05661aa88ebed63b123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.040121    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key ...
	I0906 11:29:41.040129    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key: {Name:mk31784e271a5fc246284ae6ec9ced9f17941bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.040332    8455 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 11:29:41.169434    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt ...
	I0906 11:29:41.169449    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt: {Name:mkaee423f73427fa66087c4472bcd548d6c5d062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.169752    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key ...
	I0906 11:29:41.169760    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key: {Name:mk161526db6120d6dbaaf8eaf1186e0c40edb514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.169969    8455 certs.go:256] generating profile certs ...
	I0906 11:29:41.170022    8455 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.key
	I0906 11:29:41.170035    8455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt with IP's: []
	I0906 11:29:41.239632    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt ...
	I0906 11:29:41.239650    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: {Name:mke2a26b02855371d7edc15c55e24e65da8b67c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.239958    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.key ...
	I0906 11:29:41.239967    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.key: {Name:mk3c075150ed2df82f62269c4bd954c515fa0583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.240192    8455 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca
	I0906 11:29:41.240215    8455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.21]
	I0906 11:29:41.336445    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca ...
	I0906 11:29:41.336462    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca: {Name:mk91b2ee67c89c47e1d49f9bb6d42fbfd0bfdd5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.336808    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca ...
	I0906 11:29:41.336818    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca: {Name:mkc44f6221b70f1996427e602350a1814ac148ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.337041    8455 certs.go:381] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt
	I0906 11:29:41.337269    8455 certs.go:385] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key
	I0906 11:29:41.337441    8455 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key
	I0906 11:29:41.337462    8455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt with IP's: []
	I0906 11:29:41.454677    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt ...
	I0906 11:29:41.454692    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt: {Name:mk79c9ec2cf3aec157dc0c69fc61ebe245f21c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.455011    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key ...
	I0906 11:29:41.455026    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key: {Name:mk653cc99373bf1ec78acbf11c780a3dc4359bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.455471    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 11:29:41.455522    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 11:29:41.455554    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 11:29:41.455584    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 11:29:41.456140    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 11:29:41.476588    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 11:29:41.495712    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 11:29:41.515425    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 11:29:41.544278    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 11:29:41.572544    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 11:29:41.592257    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 11:29:41.612143    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 11:29:41.631437    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 11:29:41.651353    8455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 11:29:41.665525    8455 ssh_runner.go:195] Run: openssl version
	I0906 11:29:41.670483    8455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 11:29:41.680279    8455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:41.683774    8455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6  2024 /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:41.683809    8455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:41.688035    8455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 11:29:41.697222    8455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 11:29:41.700213    8455 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 11:29:41.700259    8455 kubeadm.go:392] StartCluster: {Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:29:41.700348    8455 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 11:29:41.712942    8455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 11:29:41.721902    8455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 11:29:41.730279    8455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 11:29:41.738350    8455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 11:29:41.738358    8455 kubeadm.go:157] found existing configuration files:
	
	I0906 11:29:41.738395    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 11:29:41.746116    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 11:29:41.746158    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 11:29:41.754029    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 11:29:41.762605    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 11:29:41.762660    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 11:29:41.770800    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 11:29:41.778477    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 11:29:41.778520    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 11:29:41.786466    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 11:29:41.794132    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 11:29:41.794169    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 11:29:41.801983    8455 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 11:29:41.835594    8455 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 11:29:41.835672    8455 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 11:29:41.913800    8455 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 11:29:41.913897    8455 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 11:29:41.913973    8455 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 11:29:41.922430    8455 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 11:29:41.931910    8455 out.go:235]   - Generating certificates and keys ...
	I0906 11:29:41.932004    8455 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 11:29:41.932061    8455 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 11:29:42.134277    8455 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 11:29:42.212129    8455 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 11:29:42.460420    8455 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 11:29:42.954755    8455 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 11:29:43.251036    8455 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 11:29:43.251158    8455 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-565000 localhost] and IPs [192.169.0.21 127.0.0.1 ::1]
	I0906 11:29:43.328007    8455 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 11:29:43.328148    8455 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-565000 localhost] and IPs [192.169.0.21 127.0.0.1 ::1]
	I0906 11:29:43.474236    8455 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 11:29:43.705955    8455 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 11:29:44.021455    8455 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 11:29:44.021607    8455 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 11:29:44.247495    8455 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 11:29:44.516381    8455 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 11:29:44.975732    8455 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 11:29:45.075378    8455 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 11:29:45.126391    8455 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 11:29:45.126858    8455 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 11:29:45.128655    8455 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 11:29:45.150072    8455 out.go:235]   - Booting up control plane ...
	I0906 11:29:45.150146    8455 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 11:29:45.150211    8455 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 11:29:45.150267    8455 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 11:29:45.150350    8455 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 11:29:45.151014    8455 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 11:29:45.151110    8455 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 11:29:45.255434    8455 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 11:29:45.255548    8455 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 11:29:46.255712    8455 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001186147s
	I0906 11:29:46.255799    8455 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 11:29:50.754321    8455 kubeadm.go:310] [api-check] The API server is healthy after 4.50183121s
	I0906 11:29:50.763214    8455 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 11:29:50.774769    8455 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 11:29:50.786473    8455 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 11:29:50.786624    8455 kubeadm.go:310] [mark-control-plane] Marking the node addons-565000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 11:29:50.793440    8455 kubeadm.go:310] [bootstrap-token] Using token: v414ok.q4r5c1ktsisywdo3
	I0906 11:29:50.831636    8455 out.go:235]   - Configuring RBAC rules ...
	I0906 11:29:50.831770    8455 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 11:29:50.875990    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 11:29:50.880177    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 11:29:50.882314    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 11:29:50.884187    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 11:29:50.921970    8455 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 11:29:51.164312    8455 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 11:29:51.583591    8455 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 11:29:52.159842    8455 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 11:29:52.160656    8455 kubeadm.go:310] 
	I0906 11:29:52.160707    8455 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 11:29:52.160712    8455 kubeadm.go:310] 
	I0906 11:29:52.160791    8455 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 11:29:52.160799    8455 kubeadm.go:310] 
	I0906 11:29:52.160829    8455 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 11:29:52.160885    8455 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 11:29:52.160928    8455 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 11:29:52.160940    8455 kubeadm.go:310] 
	I0906 11:29:52.160984    8455 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 11:29:52.160989    8455 kubeadm.go:310] 
	I0906 11:29:52.161032    8455 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 11:29:52.161038    8455 kubeadm.go:310] 
	I0906 11:29:52.161086    8455 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 11:29:52.161158    8455 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 11:29:52.161212    8455 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 11:29:52.161223    8455 kubeadm.go:310] 
	I0906 11:29:52.161299    8455 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 11:29:52.161367    8455 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 11:29:52.161375    8455 kubeadm.go:310] 
	I0906 11:29:52.161443    8455 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v414ok.q4r5c1ktsisywdo3 \
	I0906 11:29:52.161526    8455 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:45446d42e4448e7605f26f9a5cfb01778d08c7c0d429a2f5a46c753d1be13709 \
	I0906 11:29:52.161542    8455 kubeadm.go:310] 	--control-plane 
	I0906 11:29:52.161550    8455 kubeadm.go:310] 
	I0906 11:29:52.161626    8455 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 11:29:52.161635    8455 kubeadm.go:310] 
	I0906 11:29:52.161703    8455 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v414ok.q4r5c1ktsisywdo3 \
	I0906 11:29:52.161782    8455 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:45446d42e4448e7605f26f9a5cfb01778d08c7c0d429a2f5a46c753d1be13709 
	I0906 11:29:52.162086    8455 kubeadm.go:310] W0906 18:29:40.808614    1575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 11:29:52.162318    8455 kubeadm.go:310] W0906 18:29:40.809137    1575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 11:29:52.162407    8455 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 11:29:52.162427    8455 cni.go:84] Creating CNI manager for ""
	I0906 11:29:52.162440    8455 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:52.220101    8455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 11:29:52.241095    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 11:29:52.249649    8455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 11:29:52.265503    8455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 11:29:52.265585    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:52.265594    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-565000 minikube.k8s.io/updated_at=2024_09_06T11_29_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-565000 minikube.k8s.io/primary=true
	I0906 11:29:52.292049    8455 ops.go:34] apiserver oom_adj: -16
	I0906 11:29:52.360649    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:52.861465    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:53.360705    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:53.860711    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:54.360781    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:54.861517    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:55.360895    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:55.860733    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:56.360803    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:56.420219    8455 kubeadm.go:1113] duration metric: took 4.154703743s to wait for elevateKubeSystemPrivileges
	I0906 11:29:56.420238    8455 kubeadm.go:394] duration metric: took 14.720029334s to StartCluster
	I0906 11:29:56.420254    8455 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:56.420424    8455 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:29:56.420653    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:56.420911    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 11:29:56.420937    8455 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 11:29:56.420963    8455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 11:29:56.421015    8455 addons.go:69] Setting yakd=true in profile "addons-565000"
	I0906 11:29:56.421030    8455 addons.go:69] Setting inspektor-gadget=true in profile "addons-565000"
	I0906 11:29:56.421041    8455 addons.go:234] Setting addon yakd=true in "addons-565000"
	I0906 11:29:56.421037    8455 addons.go:69] Setting storage-provisioner=true in profile "addons-565000"
	I0906 11:29:56.421049    8455 addons.go:69] Setting volcano=true in profile "addons-565000"
	I0906 11:29:56.421074    8455 addons.go:69] Setting gcp-auth=true in profile "addons-565000"
	I0906 11:29:56.421081    8455 addons.go:69] Setting ingress-dns=true in profile "addons-565000"
	I0906 11:29:56.421089    8455 addons.go:69] Setting metrics-server=true in profile "addons-565000"
	I0906 11:29:56.421107    8455 addons.go:69] Setting cloud-spanner=true in profile "addons-565000"
	I0906 11:29:56.421108    8455 addons.go:69] Setting volumesnapshots=true in profile "addons-565000"
	I0906 11:29:56.421095    8455 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-565000"
	I0906 11:29:56.421125    8455 addons.go:234] Setting addon metrics-server=true in "addons-565000"
	I0906 11:29:56.421124    8455 addons.go:234] Setting addon volcano=true in "addons-565000"
	I0906 11:29:56.421136    8455 addons.go:234] Setting addon cloud-spanner=true in "addons-565000"
	I0906 11:29:56.421150    8455 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-565000"
	I0906 11:29:56.421092    8455 addons.go:234] Setting addon storage-provisioner=true in "addons-565000"
	I0906 11:29:56.421183    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421140    8455 addons.go:234] Setting addon volumesnapshots=true in "addons-565000"
	I0906 11:29:56.421186    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421223    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421072    8455 addons.go:69] Setting helm-tiller=true in profile "addons-565000"
	I0906 11:29:56.421259    8455 addons.go:234] Setting addon helm-tiller=true in "addons-565000"
	I0906 11:29:56.421078    8455 addons.go:69] Setting ingress=true in profile "addons-565000"
	I0906 11:29:56.421283    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421288    8455 addons.go:234] Setting addon ingress=true in "addons-565000"
	I0906 11:29:56.421061    8455 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-565000"
	I0906 11:29:56.421315    8455 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-565000"
	I0906 11:29:56.421318    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421101    8455 addons.go:234] Setting addon ingress-dns=true in "addons-565000"
	I0906 11:29:56.421333    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421377    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421065    8455 addons.go:234] Setting addon inspektor-gadget=true in "addons-565000"
	I0906 11:29:56.421442    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421561    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421580    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421588    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421599    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421604    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421620    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421621    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421640    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421665    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421675    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421686    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421687    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421701    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421101    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421103    8455 mustload.go:65] Loading cluster: addons-565000
	I0906 11:29:56.421745    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421098    8455 addons.go:69] Setting registry=true in profile "addons-565000"
	I0906 11:29:56.421752    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421769    8455 addons.go:234] Setting addon registry=true in "addons-565000"
	I0906 11:29:56.421768    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421774    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421786    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421798    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421136    8455 addons.go:69] Setting default-storageclass=true in profile "addons-565000"
	I0906 11:29:56.421907    8455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-565000"
	I0906 11:29:56.421147    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421125    8455 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-565000"
	I0906 11:29:56.421150    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421106    8455 config.go:182] Loaded profile config "addons-565000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:56.423237    8455 config.go:182] Loaded profile config "addons-565000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:56.423341    8455 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-565000"
	I0906 11:29:56.423482    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.423762    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425212    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.425267    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425268    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425266    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425183    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425774    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425852    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.425912    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.425918    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.426917    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.427026    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.427180    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.427278    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.436533    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53815
	I0906 11:29:56.440847    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.440997    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53817
	I0906 11:29:56.441095    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53818
	I0906 11:29:56.446302    8455 out.go:177] * Verifying Kubernetes components...
	I0906 11:29:56.446722    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53821
	I0906 11:29:56.446744    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53822
	I0906 11:29:56.446749    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.446754    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.446766    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53823
	I0906 11:29:56.447675    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.451778    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53824
	I0906 11:29:56.463442    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.452528    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53825
	I0906 11:29:56.456155    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53826
	I0906 11:29:56.456184    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53828
	I0906 11:29:56.456204    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53827
	I0906 11:29:56.458996    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53829
	I0906 11:29:56.460005    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53830
	I0906 11:29:56.464034    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464038    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.460211    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53831
	I0906 11:29:56.462325    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53832
	I0906 11:29:56.464157    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464210    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464238    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.464277    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464277    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464306    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464327    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464345    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464335    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464489    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464586    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464672    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464797    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464800    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464801    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464830    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464840    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464843    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464927    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464956    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53845
	I0906 11:29:56.464966    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464976    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465060    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465071    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465359    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.465412    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465445    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465522    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465531    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465537    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465546    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465592    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465600    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465609    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465672    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.465674    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465689    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465701    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465726    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465734    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465739    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465742    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465752    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465753    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465756    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465770    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.467198    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.467946    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.468145    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.468189    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.469703    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.468329    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.469669    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470044    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470058    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470063    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.470071    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.470081    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.470101    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470110    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.470135    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.470012    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.471587    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.471856    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.472158    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.472359    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.472285    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.472449    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.472588    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.472668    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.473592    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.473633    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.473665    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.474108    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.474253    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.474323    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.475200    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.475334    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.475320    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.475489    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.475579    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.475775    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.476017    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476133    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476204    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.476053    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476368    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476550    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476639    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476666    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476707    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476710    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.477228    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.477291    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.479695    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.481574    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.482596    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53847
	I0906 11:29:56.486165    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.486175    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53849
	I0906 11:29:56.489566    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.491994    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.492296    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.492390    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53851
	I0906 11:29:56.494937    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53852
	I0906 11:29:56.495375    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.497952    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.498199    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.498212    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53855
	I0906 11:29:56.498498    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.498545    8455 addons.go:234] Setting addon default-storageclass=true in "addons-565000"
	I0906 11:29:56.498655    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.498795    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.498812    8455 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-565000"
	I0906 11:29:56.498881    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.498953    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.503640    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.503655    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.503819    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53857
	I0906 11:29:56.503818    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53858
	I0906 11:29:56.503866    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:56.503881    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.503930    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.504693    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.505873    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.506092    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.506123    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.505841    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.506431    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53861
	I0906 11:29:56.506491    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.506582    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.506682    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.506719    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.506881    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.506957    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.507072    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.512514    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.512637    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.513745    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.514156    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.514236    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.514425    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.514677    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.514617    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53863
	I0906 11:29:56.514665    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.514745    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.514866    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.514769    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.514981    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.515008    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53864
	I0906 11:29:56.515075    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.515416    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.515405    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.515444    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.515449    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.515557    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.515575    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.528277    8455 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 11:29:56.518995    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.520950    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.521007    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53867
	I0906 11:29:56.521018    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.521058    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.521449    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.521847    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.521974    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.521998    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.522067    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.522909    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53868
	I0906 11:29:56.524440    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53869
	I0906 11:29:56.527741    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53870
	I0906 11:29:56.518367    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.528569    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53871
	I0906 11:29:56.528852    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.528988    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529134    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529139    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.529166    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.529227    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.529254    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.529500    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.529576    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.529659    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.549483    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.529690    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.549490    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.529752    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529768    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.529789    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.549590    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529796    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.530590    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.530657    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.549721    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.530737    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.532082    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53877
	I0906 11:29:56.549180    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 11:29:56.549863    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.549977    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.586672    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.550052    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.623422    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.549539    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.550155    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.550293    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.550296    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.623527    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.550342    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.550486    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.550944    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.586321    8455 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 11:29:56.623628    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 11:29:56.623643    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.586834    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.586960    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.623188    8455 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 11:29:56.623197    8455 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 11:29:56.623504    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.623515    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.623770    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.623877    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.623909    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.623912    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.623934    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.623937    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.624060    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.625134    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.697573    8455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 11:29:56.697571    8455 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 11:29:56.697618    8455 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 11:29:56.698304    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.698666    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.698720    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.698924    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.700206    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.708965    8455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:29:56.709063    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 11:29:56.718376    8455 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 11:29:56.718543    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.718561    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.718708    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.718729    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.755320    8455 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 11:29:56.755573    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 11:29:56.755605    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.755619    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.755661    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 11:29:56.776651    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.755557    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.755857    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.756014    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.756001    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.756009    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.756009    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.776941    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.776939    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.756733    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.758418    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.776993    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.777037    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.765000    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53880
	I0906 11:29:56.777056    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.777060    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.777080    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.776388    8455 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 11:29:56.777160    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.777196    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.777342    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.777596    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.777605    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.778278    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.778315    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.797339    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:29:56.797806    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.797816    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.818188    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 11:29:56.818356    8455 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 11:29:56.818392    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.818190    8455 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 11:29:56.818456    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.818302    8455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:29:56.855556    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 11:29:56.818295    8455 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 11:29:56.855574    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.855585    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 11:29:56.818615    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.855619    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.818687    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.818879    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.855313    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 11:29:56.856276    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.856311    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.863337    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 11:29:56.864707    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53885
	I0906 11:29:56.876313    8455 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0906 11:29:56.876711    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.882651    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 11:29:56.897235    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 11:29:56.897414    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.904823    8455 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 11:29:56.934459    8455 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 11:29:56.934857    8455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 11:29:56.934897    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.935030    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.935094    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.935179    8455 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 11:29:56.935275    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.935345    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.935396    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.935459    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.935675    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.935773    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.935830    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.936040    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.936142    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.936304    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.936370    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.936455    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.936577    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.936747    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.936872    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.937001    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.937028    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.937369    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.937527    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.937672    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.937783    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.938306    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.938981    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.987318    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 11:29:56.987708    8455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 11:29:57.008385    8455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 11:29:57.008214    8455 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 11:29:57.008406    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.008410    8455 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 11:29:57.008226    8455 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 11:29:57.008429    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.008243    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:29:57.008227    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 11:29:57.008573    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.008606    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.009811    8455 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 11:29:57.029259    8455 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 11:29:57.050499    8455 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 11:29:57.050523    8455 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 11:29:57.087386    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.050662    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.050678    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.083696    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 11:29:57.087101    8455 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0906 11:29:57.087150    8455 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 11:29:57.124765    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 11:29:57.087568    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.087595    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.087597    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.116233    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 11:29:57.120011    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 11:29:57.124472    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 11:29:57.124822    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.145588    8455 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 11:29:57.145602    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 11:29:57.124858    8455 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 11:29:57.145617    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.125025    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.125023    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.125024    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.125026    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.145833    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.145859    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.145863    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.150688    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:29:57.166629    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.166637    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.166660    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.175366    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 11:29:57.187716    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 11:29:57.187816    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.187852    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.224402    8455 out.go:177]   - Using image docker.io/busybox:stable
	I0906 11:29:57.224733    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.238083    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 11:29:57.245461    8455 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 11:29:57.245234    8455 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0906 11:29:57.266380    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 11:29:57.287436    8455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 11:29:57.287524    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 11:29:57.287545    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.287691    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.287787    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.287875    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.287976    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.297104    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 11:29:57.297115    8455 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 11:29:57.315582    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 11:29:57.315593    8455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 11:29:57.329712    8455 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 11:29:57.329724    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0906 11:29:57.329739    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.329868    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.329941    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.330039    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.330119    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.342994    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 11:29:57.343005    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 11:29:57.387084    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 11:29:57.391094    8455 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 11:29:57.391106    8455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 11:29:57.405605    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 11:29:57.423540    8455 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 11:29:57.423552    8455 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 11:29:57.423931    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 11:29:57.445048    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 11:29:57.483904    8455 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 11:29:57.483918    8455 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 11:29:57.484726    8455 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 11:29:57.484735    8455 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 11:29:57.486807    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 11:29:57.492154    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 11:29:57.492167    8455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 11:29:57.545077    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 11:29:57.559273    8455 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0906 11:29:57.559897    8455 node_ready.go:35] waiting up to 6m0s for node "addons-565000" to be "Ready" ...
	I0906 11:29:57.567128    8455 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 11:29:57.567140    8455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 11:29:57.569715    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 11:29:57.584195    8455 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 11:29:57.584208    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 11:29:57.588706    8455 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 11:29:57.588718    8455 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 11:29:57.603058    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 11:29:57.624035    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 11:29:57.624055    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 11:29:57.624078    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.624240    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.624383    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.624486    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.624599    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.638014    8455 node_ready.go:49] node "addons-565000" has status "Ready":"True"
	I0906 11:29:57.638028    8455 node_ready.go:38] duration metric: took 78.116304ms for node "addons-565000" to be "Ready" ...
	I0906 11:29:57.638037    8455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	W0906 11:29:57.654349    8455 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-565000" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0906 11:29:57.654362    8455 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0906 11:29:57.668286    8455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace to be "Ready" ...
	I0906 11:29:57.693200    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 11:29:57.697036    8455 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 11:29:57.697049    8455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 11:29:57.795571    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 11:29:57.814012    8455 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 11:29:57.814024    8455 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 11:29:57.828048    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 11:29:57.841022    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 11:29:57.944816    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 11:29:57.944829    8455 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 11:29:58.136317    8455 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 11:29:58.136329    8455 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 11:29:58.406130    8455 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:29:58.406143    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 11:29:58.783795    8455 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 11:29:58.783808    8455 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 11:29:58.850827    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 11:29:58.850841    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 11:29:58.966562    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:29:59.027963    8455 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 11:29:59.027976    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 11:29:59.150576    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 11:29:59.150591    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 11:29:59.152813    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.217849688s)
	I0906 11:29:59.152838    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.152845    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.152955    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.218115416s)
	I0906 11:29:59.152983    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.152990    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.153002    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.153000    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153022    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.153042    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.153051    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.153137    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153142    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.153146    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.153172    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.153179    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.153257    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153266    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.153268    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.153350    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153362    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.255176    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 11:29:59.255190    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 11:29:59.267630    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 11:29:59.341072    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 11:29:59.341086    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 11:29:59.566332    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.478873298s)
	I0906 11:29:59.566359    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.566366    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.566543    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.566543    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.566552    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.566559    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.566563    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.566706    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.566717    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.605212    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 11:29:59.605224    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 11:29:59.672786    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:59.963492    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 11:29:59.963505    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 11:30:00.324939    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 11:30:00.324958    8455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 11:30:00.591312    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.46645806s)
	I0906 11:30:00.591344    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.591356    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.591527    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.591536    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.591543    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.591547    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.591569    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.591700    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.591716    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.591726    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.622405    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 11:30:00.622418    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 11:30:00.775030    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.608503239s)
	I0906 11:30:00.775066    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775078    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775086    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.369460747s)
	I0906 11:30:00.775114    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775139    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775369    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775381    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.775409    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.775410    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775427    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775446    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775467    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.775484    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775495    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775656    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.775685    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775699    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.775730    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775764    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.785954    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.785974    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.786133    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.786184    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.786196    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:01.010438    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 11:30:01.010451    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 11:30:01.241014    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 11:30:01.241031    8455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 11:30:01.544389    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 11:30:01.756601    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:03.776050    8455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 11:30:03.776072    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:30:03.776222    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:30:03.776327    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:30:03.776425    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:30:03.776520    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:30:04.056811    8455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 11:30:04.181174    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:04.222147    8455 addons.go:234] Setting addon gcp-auth=true in "addons-565000"
	I0906 11:30:04.222180    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:30:04.222463    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:30:04.222480    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:30:04.231735    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53901
	I0906 11:30:04.232095    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:30:04.232496    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:30:04.232512    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:30:04.232739    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:30:04.233145    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:30:04.233163    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:30:04.242121    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53903
	I0906 11:30:04.242462    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:30:04.242817    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:30:04.242840    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:30:04.243079    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:30:04.243208    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:30:04.243293    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:30:04.243381    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:30:04.244374    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:30:04.244540    8455 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 11:30:04.244552    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:30:04.244639    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:30:04.244719    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:30:04.244834    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:30:04.244908    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:30:05.084263    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.66033458s)
	I0906 11:30:05.084298    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084305    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084321    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.514601292s)
	I0906 11:30:05.084350    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084362    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084399    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.391204303s)
	I0906 11:30:05.084422    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084432    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084499    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.288920476s)
	I0906 11:30:05.084532    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084545    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.084547    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084548    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084592    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084591    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.084601    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084606    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084610    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084616    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084620    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084628    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084556    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.256506457s)
	I0906 11:30:05.084653    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084663    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084668    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084683    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084689    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.084693    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084747    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084802    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084810    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084818    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084825    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084825    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085009    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085039    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085050    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085070    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.085080    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.085083    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085104    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085111    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085121    8455 addons.go:475] Verifying addon ingress=true in "addons-565000"
	I0906 11:30:05.085126    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085362    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085389    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085361    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085413    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085419    8455 addons.go:475] Verifying addon metrics-server=true in "addons-565000"
	I0906 11:30:05.085472    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085482    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085594    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085629    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085637    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085647    8455 addons.go:475] Verifying addon registry=true in "addons-565000"
	I0906 11:30:05.085229    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.086132    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.125960    8455 out.go:177] * Verifying ingress addon...
	I0906 11:30:05.168796    8455 out.go:177] * Verifying registry addon...
	I0906 11:30:05.209883    8455 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-565000 service yakd-dashboard -n yakd-dashboard
	
	I0906 11:30:05.231457    8455 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 11:30:05.268300    8455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 11:30:05.314015    8455 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 11:30:05.314040    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:05.314422    8455 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 11:30:05.314431    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
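The kapi.go lines above poll each addon's Pods by label selector, logging the current state until the Pods report the desired phase. A minimal sketch of that wait loop, under the assumption it is a simple poll-with-timeout (names and intervals here are hypothetical; minikube's actual implementation lives in kapi.go):

```python
import time

def wait_for_state(get_state, desired="Running", timeout=60.0, interval=0.5):
    """Poll get_state() until it returns `desired` or the timeout expires.

    Mirrors the kapi.go pattern seen in the log: report the current state
    on each miss, then re-check after a short interval. All names and
    defaults here are illustrative assumptions, not minikube's code.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state == desired:
            return state
        print(f'waiting for pod, current state: {state}')
        time.sleep(interval)
    raise TimeoutError(f"pod never reached state {desired!r}")
```

With a state source that goes Pending, Pending, Running, the loop logs the two Pending misses and then returns, which matches the repeated "current state: Pending" lines above followed by eventual readiness.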
	I0906 11:30:05.334518    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.334532    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.334682    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.334689    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.334689    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.752240    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:05.783002    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.207614    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:06.265464    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:06.385025    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.744791    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:06.845743    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.935273    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.094254469s)
	I0906 11:30:06.935304    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935312    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935329    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.968766848s)
	W0906 11:30:06.935351    8455 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 11:30:06.935380    8455 retry.go:31] will retry after 268.697102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
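The apply above failed because the VolumeSnapshotClass object was submitted in the same kubectl apply as the CRDs that define it, so the CRDs were not yet established when the custom resource was mapped. The "will retry after 268.697102ms" line below is minikube's backoff helper (retry.go) kicking in. That retry pattern can be sketched as follows (a hypothetical Python sketch, not minikube's Go code; the base delay and cap are illustrative assumptions):

```python
import random
import time

def retry(fn, attempts=5, base=0.25, cap=8.0, sleep=time.sleep):
    """Retry fn() with jittered exponential backoff, in the spirit of the
    retry.go behavior visible in the log. `attempts`, `base`, and `cap`
    are illustrative assumptions, not minikube's actual tuning.
    """
    last = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as exc:  # the real helper retries on apply errors
            last = exc
            # exponential backoff with jitter, e.g. "will retry after 268ms"
            delay = min(cap, base * 2 ** i) * (0.5 + random.random() / 2)
            sleep(delay)
    raise last
```

On the second attempt in the log the CRDs have been registered, so the re-apply (issued with `--force`) succeeds where the first one failed.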
	I0906 11:30:06.935393    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.667764991s)
	I0906 11:30:06.935414    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935426    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935489    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:06.935497    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935506    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:06.935517    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935524    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935574    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935581    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:06.935589    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935595    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935652    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935665    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:06.935683    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:06.935748    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935757    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:07.204733    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:30:07.252648    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:07.385674    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:07.574377    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.029977227s)
	I0906 11:30:07.574403    8455 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.329860607s)
	I0906 11:30:07.574405    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:07.574430    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:07.574595    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:07.574596    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:07.574607    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:07.574617    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:07.574624    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:07.574761    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:07.574773    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:07.574774    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:07.574792    8455 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-565000"
	I0906 11:30:07.601380    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:30:07.658168    8455 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 11:30:07.700267    8455 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 11:30:07.700897    8455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 11:30:07.721151    8455 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 11:30:07.721166    8455 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 11:30:07.756600    8455 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 11:30:07.756613    8455 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 11:30:07.762365    8455 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 11:30:07.762377    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:07.785903    8455 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 11:30:07.785915    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 11:30:07.793816    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:07.793930    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:07.808109    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 11:30:08.204144    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:08.235373    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:08.270969    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:08.620171    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.415414624s)
	I0906 11:30:08.620199    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.620208    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.620386    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:08.620392    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.620401    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.620409    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.620415    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.620552    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:08.620571    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.620580    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.678805    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:08.709812    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:08.825512    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:08.829000    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:08.847023    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.038895786s)
	I0906 11:30:08.847048    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.847057    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.847252    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:08.847253    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.847269    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.847278    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.847283    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.847402    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.847412    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.848332    8455 addons.go:475] Verifying addon gcp-auth=true in "addons-565000"
	I0906 11:30:08.874335    8455 out.go:177] * Verifying gcp-auth addon...
	I0906 11:30:08.931430    8455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 11:30:08.933534    8455 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 11:30:09.206156    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:09.233832    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:09.271889    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:09.705437    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:09.734949    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:09.805889    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:10.205862    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:10.233962    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:10.271951    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:10.704864    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:10.734864    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:10.771057    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:11.171641    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:11.203752    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:11.235696    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:11.270597    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:11.703861    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:11.734479    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:11.803604    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:12.205280    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:12.233961    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:12.271168    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:12.706218    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:12.735450    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:12.774856    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:13.172003    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:13.204890    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:13.234616    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:13.273404    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:13.704172    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:13.735028    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:13.770364    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:14.204988    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:14.233800    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:14.271674    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:14.703819    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:14.733896    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:14.770960    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:15.203928    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:15.233774    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:15.270433    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:15.672262    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:15.704340    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:15.734107    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:15.772110    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:16.204785    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:16.306886    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:16.307019    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:16.703600    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:16.734760    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:16.772040    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:17.204393    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:17.234575    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:17.273015    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:17.704365    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:17.734294    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:17.770680    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:18.172614    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:18.204611    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:18.234096    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:18.272183    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:18.704409    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:18.734073    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:18.770995    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:19.204012    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:19.234554    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:19.270456    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:19.704292    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:19.734326    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:19.770747    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:20.173772    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:20.204025    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:20.233976    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:20.271106    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:20.704645    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:20.734562    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:20.770462    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:21.204117    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:21.234104    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:21.270675    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:21.704037    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:21.734645    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:21.803534    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:22.204915    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:22.234721    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:22.271951    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:22.672648    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:22.704206    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:22.735832    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:22.770624    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:23.203657    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:23.234885    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:23.272603    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:23.705598    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:23.735416    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:23.771798    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:24.204841    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:24.233853    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:24.271588    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:24.673454    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:24.785153    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:24.785382    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:24.785727    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.204225    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.238348    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:25.271567    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:25.705677    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.734921    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:25.771626    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:26.203921    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:26.234233    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:26.271127    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:26.704385    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:26.734361    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:26.772578    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:27.172737    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:27.204532    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:27.234268    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:27.360774    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:27.705680    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:27.734652    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:27.773251    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:28.204118    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:28.234692    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:28.270814    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:28.704132    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:28.735332    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:28.771420    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:29.206138    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:29.234411    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:29.270328    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:29.673427    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:29.704527    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:29.734053    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:29.771106    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:30.203996    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:30.235508    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:30.335802    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:30.705206    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:30.735024    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:30.772031    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:31.203946    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:31.234238    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:31.270704    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:31.674698    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:31.704817    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:31.735357    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:31.770457    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:32.204274    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:32.234338    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:32.271635    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:32.703723    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:32.735067    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:32.771147    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:33.204515    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:33.235946    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:33.270477    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:33.704384    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:33.733984    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:33.770661    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:34.172308    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:34.205353    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:34.235316    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:34.272336    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:34.704555    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:34.735148    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:34.770398    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:35.204712    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:35.235536    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:35.270625    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:35.766294    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:35.766350    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:35.771534    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:36.173751    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:36.207299    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:36.235859    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:36.273799    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:36.705316    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:36.735271    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:36.771004    8455 kapi.go:107] duration metric: took 31.502792788s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 11:30:37.205426    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:37.234825    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:37.672676    8455 pod_ready.go:93] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.672688    8455 pod_ready.go:82] duration metric: took 40.004500142s for pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.672695    8455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k8jth" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.675959    8455 pod_ready.go:93] pod "coredns-6f6b679f8f-k8jth" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.675969    8455 pod_ready.go:82] duration metric: took 3.268936ms for pod "coredns-6f6b679f8f-k8jth" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.675976    8455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.679319    8455 pod_ready.go:93] pod "etcd-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.679328    8455 pod_ready.go:82] duration metric: took 3.348271ms for pod "etcd-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.679335    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.682816    8455 pod_ready.go:93] pod "kube-apiserver-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.682825    8455 pod_ready.go:82] duration metric: took 3.485903ms for pod "kube-apiserver-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.682831    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.685824    8455 pod_ready.go:93] pod "kube-controller-manager-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.685833    8455 pod_ready.go:82] duration metric: took 2.997205ms for pod "kube-controller-manager-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.685839    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ngbg9" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.703539    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:37.734759    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.073069    8455 pod_ready.go:93] pod "kube-proxy-ngbg9" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:38.073082    8455 pod_ready.go:82] duration metric: took 387.228788ms for pod "kube-proxy-ngbg9" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.073089    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.204091    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:38.234507    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.470751    8455 pod_ready.go:93] pod "kube-scheduler-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:38.470764    8455 pod_ready.go:82] duration metric: took 397.672182ms for pod "kube-scheduler-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.470771    8455 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2d5x4" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.704619    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:38.735824    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.871317    8455 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-2d5x4" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:38.871329    8455 pod_ready.go:82] duration metric: took 400.552932ms for pod "nvidia-device-plugin-daemonset-2d5x4" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.871334    8455 pod_ready.go:39] duration metric: took 41.233405742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:30:38.871358    8455 api_server.go:52] waiting for apiserver process to appear ...
	I0906 11:30:38.871410    8455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:30:38.891002    8455 api_server.go:72] duration metric: took 42.470167316s to wait for apiserver process to appear ...
	I0906 11:30:38.891015    8455 api_server.go:88] waiting for apiserver healthz status ...
	I0906 11:30:38.891032    8455 api_server.go:253] Checking apiserver healthz at https://192.169.0.21:8443/healthz ...
	I0906 11:30:38.894959    8455 api_server.go:279] https://192.169.0.21:8443/healthz returned 200:
	ok
	I0906 11:30:38.895597    8455 api_server.go:141] control plane version: v1.31.0
	I0906 11:30:38.895607    8455 api_server.go:131] duration metric: took 4.587861ms to wait for apiserver health ...
	I0906 11:30:38.895612    8455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 11:30:39.076644    8455 system_pods.go:59] 19 kube-system pods found
	I0906 11:30:39.076664    8455 system_pods.go:61] "coredns-6f6b679f8f-jjpz5" [cb713a3d-2e0e-4205-9273-5b2a6393fe7e] Running
	I0906 11:30:39.076668    8455 system_pods.go:61] "coredns-6f6b679f8f-k8jth" [2bad9a21-c9b3-41db-ba12-73a2a482ea5f] Running
	I0906 11:30:39.076673    8455 system_pods.go:61] "csi-hostpath-attacher-0" [32c64849-b5bc-4889-b0bd-bff533458c95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 11:30:39.076679    8455 system_pods.go:61] "csi-hostpath-resizer-0" [35ca7619-3cef-4ad1-886a-73bfe39cfdc9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 11:30:39.076684    8455 system_pods.go:61] "csi-hostpathplugin-s2s7r" [abafc424-abbe-4002-9dea-b3e02be0fbf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 11:30:39.076688    8455 system_pods.go:61] "etcd-addons-565000" [94c45814-5893-4ee7-9167-283943b85079] Running
	I0906 11:30:39.076692    8455 system_pods.go:61] "kube-apiserver-addons-565000" [a9cc7a60-afe4-4958-9310-4680206a7c8d] Running
	I0906 11:30:39.076695    8455 system_pods.go:61] "kube-controller-manager-addons-565000" [32aa0352-4089-4b7f-b4bb-c7a2bfab169a] Running
	I0906 11:30:39.076697    8455 system_pods.go:61] "kube-ingress-dns-minikube" [39524e97-a112-44d1-9b4b-fb721afeef8b] Running
	I0906 11:30:39.076700    8455 system_pods.go:61] "kube-proxy-ngbg9" [5ec63286-71d3-40f9-a90a-90e231b9fb68] Running
	I0906 11:30:39.076703    8455 system_pods.go:61] "kube-scheduler-addons-565000" [e2b538c2-7781-4d08-8478-7482c429c98e] Running
	I0906 11:30:39.076706    8455 system_pods.go:61] "metrics-server-84c5f94fbc-s7gvk" [97f3e6bc-d390-4600-b657-42ea5558f40a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 11:30:39.076710    8455 system_pods.go:61] "nvidia-device-plugin-daemonset-2d5x4" [1aa1523c-25e2-4776-be7b-082ac60d2875] Running
	I0906 11:30:39.076713    8455 system_pods.go:61] "registry-6fb4cdfc84-w9b9z" [9a2c23b0-9024-42ce-9924-42afdfdbc0de] Running
	I0906 11:30:39.076716    8455 system_pods.go:61] "registry-proxy-75mvw" [1241891a-f5f4-4e10-ad7b-3c7977e9bb11] Running
	I0906 11:30:39.076721    8455 system_pods.go:61] "snapshot-controller-56fcc65765-278m8" [6787b10c-0be2-4164-8a9b-3afbcce0c71b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.076727    8455 system_pods.go:61] "snapshot-controller-56fcc65765-4ljgq" [500841ed-6cb6-4a7a-87bd-4242010b2e9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.076730    8455 system_pods.go:61] "storage-provisioner" [2522ffe3-2a6a-4185-9d41-edaef41562e4] Running
	I0906 11:30:39.076733    8455 system_pods.go:61] "tiller-deploy-b48cc5f79-xhknj" [0749425f-a61d-4e70-9b16-7d3819962a26] Running
	I0906 11:30:39.076738    8455 system_pods.go:74] duration metric: took 181.121835ms to wait for pod list to return data ...
	I0906 11:30:39.076743    8455 default_sa.go:34] waiting for default service account to be created ...
	I0906 11:30:39.203780    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:39.234430    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:39.271315    8455 default_sa.go:45] found service account: "default"
	I0906 11:30:39.271329    8455 default_sa.go:55] duration metric: took 194.582184ms for default service account to be created ...
	I0906 11:30:39.271339    8455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 11:30:39.476332    8455 system_pods.go:86] 19 kube-system pods found
	I0906 11:30:39.476348    8455 system_pods.go:89] "coredns-6f6b679f8f-jjpz5" [cb713a3d-2e0e-4205-9273-5b2a6393fe7e] Running
	I0906 11:30:39.476353    8455 system_pods.go:89] "coredns-6f6b679f8f-k8jth" [2bad9a21-c9b3-41db-ba12-73a2a482ea5f] Running
	I0906 11:30:39.476357    8455 system_pods.go:89] "csi-hostpath-attacher-0" [32c64849-b5bc-4889-b0bd-bff533458c95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 11:30:39.476361    8455 system_pods.go:89] "csi-hostpath-resizer-0" [35ca7619-3cef-4ad1-886a-73bfe39cfdc9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 11:30:39.476366    8455 system_pods.go:89] "csi-hostpathplugin-s2s7r" [abafc424-abbe-4002-9dea-b3e02be0fbf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 11:30:39.476373    8455 system_pods.go:89] "etcd-addons-565000" [94c45814-5893-4ee7-9167-283943b85079] Running
	I0906 11:30:39.476390    8455 system_pods.go:89] "kube-apiserver-addons-565000" [a9cc7a60-afe4-4958-9310-4680206a7c8d] Running
	I0906 11:30:39.476397    8455 system_pods.go:89] "kube-controller-manager-addons-565000" [32aa0352-4089-4b7f-b4bb-c7a2bfab169a] Running
	I0906 11:30:39.476407    8455 system_pods.go:89] "kube-ingress-dns-minikube" [39524e97-a112-44d1-9b4b-fb721afeef8b] Running
	I0906 11:30:39.476412    8455 system_pods.go:89] "kube-proxy-ngbg9" [5ec63286-71d3-40f9-a90a-90e231b9fb68] Running
	I0906 11:30:39.476415    8455 system_pods.go:89] "kube-scheduler-addons-565000" [e2b538c2-7781-4d08-8478-7482c429c98e] Running
	I0906 11:30:39.476420    8455 system_pods.go:89] "metrics-server-84c5f94fbc-s7gvk" [97f3e6bc-d390-4600-b657-42ea5558f40a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 11:30:39.476424    8455 system_pods.go:89] "nvidia-device-plugin-daemonset-2d5x4" [1aa1523c-25e2-4776-be7b-082ac60d2875] Running
	I0906 11:30:39.476427    8455 system_pods.go:89] "registry-6fb4cdfc84-w9b9z" [9a2c23b0-9024-42ce-9924-42afdfdbc0de] Running
	I0906 11:30:39.476430    8455 system_pods.go:89] "registry-proxy-75mvw" [1241891a-f5f4-4e10-ad7b-3c7977e9bb11] Running
	I0906 11:30:39.476434    8455 system_pods.go:89] "snapshot-controller-56fcc65765-278m8" [6787b10c-0be2-4164-8a9b-3afbcce0c71b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.476440    8455 system_pods.go:89] "snapshot-controller-56fcc65765-4ljgq" [500841ed-6cb6-4a7a-87bd-4242010b2e9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.476444    8455 system_pods.go:89] "storage-provisioner" [2522ffe3-2a6a-4185-9d41-edaef41562e4] Running
	I0906 11:30:39.476448    8455 system_pods.go:89] "tiller-deploy-b48cc5f79-xhknj" [0749425f-a61d-4e70-9b16-7d3819962a26] Running
	I0906 11:30:39.476453    8455 system_pods.go:126] duration metric: took 205.110266ms to wait for k8s-apps to be running ...
	I0906 11:30:39.476462    8455 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 11:30:39.476514    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 11:30:39.488009    8455 system_svc.go:56] duration metric: took 11.543084ms WaitForService to wait for kubelet
	I0906 11:30:39.488032    8455 kubeadm.go:582] duration metric: took 43.067199604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:30:39.488044    8455 node_conditions.go:102] verifying NodePressure condition ...
	I0906 11:30:39.671936    8455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 11:30:39.671955    8455 node_conditions.go:123] node cpu capacity is 2
	I0906 11:30:39.671963    8455 node_conditions.go:105] duration metric: took 183.916079ms to run NodePressure ...
	I0906 11:30:39.671971    8455 start.go:241] waiting for startup goroutines ...
	I0906 11:30:39.703638    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:39.735064    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:40.268771    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:40.268922    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:40.705072    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:40.736475    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:41.206842    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:41.237376    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:41.703621    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:41.733946    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:42.207141    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:42.234295    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:42.704268    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:42.736920    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:43.204271    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:43.233552    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:43.704506    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:43.735026    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:44.205614    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:44.233707    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:44.704800    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:44.733749    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:45.205355    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:45.233623    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:45.703714    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:45.734341    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:46.207271    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:46.235224    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:46.703567    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:46.734418    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:47.204625    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:47.234864    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:47.704765    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:47.733579    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:48.204068    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:48.235115    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:48.704263    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:48.733636    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:49.205419    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:49.241936    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:49.704811    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:49.734045    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:50.204568    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:50.236082    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:50.703854    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:50.735363    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:51.206236    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:51.236495    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:51.707129    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:51.734259    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:52.204576    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:52.235116    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:52.704268    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:52.733841    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:53.203899    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:53.233990    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:53.704342    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:53.735162    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:54.204532    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:54.235844    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:54.706937    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:54.737422    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:55.203449    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:55.238555    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:55.706058    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:55.734489    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:56.203939    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:56.234295    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:56.704296    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:56.735555    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:57.203199    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:57.235753    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:57.704524    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:57.734332    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:58.203717    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:58.235780    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:58.704723    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:58.733918    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:59.203672    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:59.233488    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:59.704120    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:59.805685    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:00.204681    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:00.235144    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:00.706503    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:00.734871    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:01.204267    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:01.235365    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:01.703709    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:01.734124    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:02.203540    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:02.235837    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:02.708414    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:02.737377    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:03.205373    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:03.235921    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:03.706007    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:03.735619    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:04.203984    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:04.234288    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:04.704218    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:04.734423    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:05.203728    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:05.234481    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:05.703859    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:05.734201    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:06.203434    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:06.235977    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:06.705343    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:06.734854    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:07.203363    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:07.234475    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:07.704248    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:07.734330    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:08.203410    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:08.234172    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:08.703689    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:08.737380    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:09.205875    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:09.235651    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:09.703703    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:09.735137    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:10.203782    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:10.236135    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:10.704555    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:10.735942    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:11.204419    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:11.234612    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:11.703773    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:11.734688    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:12.204201    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:12.234606    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:12.704284    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:12.733826    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:13.206776    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:13.236507    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:13.704196    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:13.734845    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:14.204903    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:14.234475    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:14.703996    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:14.735384    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:15.205695    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:15.234663    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:15.705438    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:15.734494    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:16.205536    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:16.236202    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:16.705633    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:16.735165    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:17.204190    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:17.233719    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:17.704463    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:17.734113    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:18.204646    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:18.234919    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:18.705247    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:18.734739    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:19.205924    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:19.234703    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:19.704093    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:19.734859    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:20.204088    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:20.234601    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:20.704423    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:20.733910    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:21.205399    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:21.234683    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:21.703925    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:21.734396    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:22.203905    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:22.234151    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:22.703780    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:22.734060    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:23.204855    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:23.233860    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:23.703915    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:23.733847    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:24.203712    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:24.234122    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:24.703564    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:24.733634    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:25.204987    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:25.234667    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:25.703408    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:25.733780    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:26.204004    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:26.233606    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:26.703620    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:26.733901    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:27.203651    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:27.234534    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:27.704534    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:27.733944    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:28.206462    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:28.236101    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:28.703892    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:28.734231    8455 kapi.go:107] duration metric: took 1m23.503011028s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 11:31:29.229069    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:29.704294    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:30.203646    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:30.704201    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:31.204107    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:31.704354    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:32.204544    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:32.706333    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:33.207256    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:33.703920    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:34.204571    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:34.704469    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:35.204144    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:35.705153    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:36.203780    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:36.703991    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:37.203575    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:37.706037    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:38.208562    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:38.703757    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:39.203579    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:39.704238    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:40.205533    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:40.704962    8455 kapi.go:107] duration metric: took 1m33.00432836s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 11:32:53.933629    8455 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 11:32:53.933641    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:54.439721    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:54.937455    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:55.434033    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:55.933203    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:56.435390    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:56.934517    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:57.433420    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:57.933847    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:58.434214    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:58.934033    8455 kapi.go:107] duration metric: took 2m50.003085598s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 11:32:58.974535    8455 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-565000 cluster.
	I0906 11:32:59.014908    8455 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 11:32:59.036173    8455 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 11:32:59.093940    8455 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, helm-tiller, storage-provisioner, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volcano, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0906 11:32:59.136054    8455 addons.go:510] duration metric: took 3m2.71561894s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns helm-tiller storage-provisioner default-storageclass metrics-server yakd storage-provisioner-rancher volcano inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0906 11:32:59.136089    8455 start.go:246] waiting for cluster config update ...
	I0906 11:32:59.136114    8455 start.go:255] writing updated cluster config ...
	I0906 11:32:59.136473    8455 ssh_runner.go:195] Run: rm -f paused
	I0906 11:32:59.177790    8455 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0906 11:32:59.215143    8455 out.go:201] 
	W0906 11:32:59.236123    8455 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0906 11:32:59.257068    8455 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0906 11:32:59.336179    8455 out.go:177] * Done! kubectl is now configured to use "addons-565000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.403672042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.403779582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.403793613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.403982299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:32:58 addons-565000 cri-dockerd[1156]: time="2024-09-06T18:32:58Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.609253434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.609314776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.609328298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:32:58 addons-565000 dockerd[1266]: time="2024-09-06T18:32:58.610557866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:32:59 addons-565000 dockerd[1259]: time="2024-09-06T18:32:59.835341853Z" level=info msg="ignoring event" container=07551c53ede47007f107a64bc992c4f357a21d86d6a5c72896925ce6120ffdac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:32:59 addons-565000 dockerd[1266]: time="2024-09-06T18:32:59.835610267Z" level=info msg="shim disconnected" id=07551c53ede47007f107a64bc992c4f357a21d86d6a5c72896925ce6120ffdac namespace=moby
	Sep 06 18:32:59 addons-565000 dockerd[1266]: time="2024-09-06T18:32:59.835643606Z" level=warning msg="cleaning up after shim disconnected" id=07551c53ede47007f107a64bc992c4f357a21d86d6a5c72896925ce6120ffdac namespace=moby
	Sep 06 18:32:59 addons-565000 dockerd[1266]: time="2024-09-06T18:32:59.835650340Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:34:32 addons-565000 cri-dockerd[1156]: time="2024-09-06T18:34:32Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 06 18:34:32 addons-565000 dockerd[1266]: time="2024-09-06T18:34:32.901293140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 18:34:32 addons-565000 dockerd[1266]: time="2024-09-06T18:34:32.901628001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 18:34:32 addons-565000 dockerd[1266]: time="2024-09-06T18:34:32.901685259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:34:32 addons-565000 dockerd[1266]: time="2024-09-06T18:34:32.901815369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 18:34:33 addons-565000 dockerd[1259]: time="2024-09-06T18:34:33.899890657Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 06 18:34:33 addons-565000 dockerd[1259]: time="2024-09-06T18:34:33.899980569Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 06 18:34:33 addons-565000 dockerd[1266]: time="2024-09-06T18:34:33.908790497Z" level=info msg="shim disconnected" id=f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7 namespace=moby
	Sep 06 18:34:33 addons-565000 dockerd[1266]: time="2024-09-06T18:34:33.908858036Z" level=warning msg="cleaning up after shim disconnected" id=f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7 namespace=moby
	Sep 06 18:34:33 addons-565000 dockerd[1266]: time="2024-09-06T18:34:33.908867227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:34:33 addons-565000 dockerd[1259]: time="2024-09-06T18:34:33.909178089Z" level=info msg="ignoring event" container=f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:34:33 addons-565000 dockerd[1259]: time="2024-09-06T18:34:33.910023880Z" level=error msg="Error running exec ae9e7b1f669321336dd05eb7977b70e66910c34204150b46c47a79fc3ebe7781 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	f11edf970ac50       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            About a minute ago   Exited              gadget                                   5                   8c88e518cfacb       gadget-6j9m6
	d5be48ce48be9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 3 minutes ago        Running             gcp-auth                                 0                   f4b2c509afcb5       gcp-auth-89d5ffd79-9jkbf
	c200ca383bcf0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          4 minutes ago        Running             csi-snapshotter                          0                   3f697160b20b1       csi-hostpathplugin-s2s7r
	2793adc095f39       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          4 minutes ago        Running             csi-provisioner                          0                   3f697160b20b1       csi-hostpathplugin-s2s7r
	6d34a8b512dd7       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            4 minutes ago        Running             liveness-probe                           0                   3f697160b20b1       csi-hostpathplugin-s2s7r
	b9e85562934a1       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           4 minutes ago        Running             hostpath                                 0                   3f697160b20b1       csi-hostpathplugin-s2s7r
	8f4b69cc88e53       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                4 minutes ago        Running             node-driver-registrar                    0                   3f697160b20b1       csi-hostpathplugin-s2s7r
	fad5fad078b35       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         4 minutes ago        Running             admission                                0                   ca0dd3e61b34e       volcano-admission-77d7d48b68-tmx68
	e8ddd9c18d912       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             4 minutes ago        Running             controller                               0                   0bb8dda5e2a3f       ingress-nginx-controller-bc57996ff-kttz2
	849ab3f4ae24c       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               4 minutes ago        Running             volcano-scheduler                        0                   23639f2b44b93       volcano-scheduler-576bc46687-k5qvm
	88d88ebf8743f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   5 minutes ago        Running             csi-external-health-monitor-controller   0                   3f697160b20b1       csi-hostpathplugin-s2s7r
	6ab9ffb0019d7       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              5 minutes ago        Running             csi-resizer                              0                   c92d399c87f1c       csi-hostpath-resizer-0
	6fc3ce5804961       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             5 minutes ago        Running             csi-attacher                             0                   7f84310f81589       csi-hostpath-attacher-0
	9d61e349941da       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      5 minutes ago        Running             volcano-controllers                      0                   9889d6d33187c       volcano-controllers-56675bb4d5-tntd9
	c6b4843f2ae98       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   5 minutes ago        Exited              patch                                    0                   cf989fa0fd2dd       ingress-nginx-admission-patch-4nkbp
	d2ef4f5201e40       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   5 minutes ago        Exited              create                                   0                   13179db06ca9b       ingress-nginx-admission-create-5gcgq
	199e9b3d52beb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago        Running             volume-snapshot-controller               0                   7791c4d7ef37a       snapshot-controller-56fcc65765-278m8
	42345e8ddf267       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago        Running             volume-snapshot-controller               0                   d32a6962c647b       snapshot-controller-56fcc65765-4ljgq
	f5fc50cfeff28       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       5 minutes ago        Running             local-path-provisioner                   0                   e50b6524aca80       local-path-provisioner-86d989889c-tsbl5
	f4ae218923a64       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        5 minutes ago        Running             metrics-server                           0                   1fef8a40c6e8b       metrics-server-84c5f94fbc-s7gvk
	c36fc9de53606       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        5 minutes ago        Running             yakd                                     0                   787e61a72bb4e       yakd-dashboard-67d98fc6b-vz5pv
	a9968173210fc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              5 minutes ago        Running             registry-proxy                           0                   da3825162e3ae       registry-proxy-75mvw
	485efc75aa16c       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                                             5 minutes ago        Running             registry                                 0                   38e6f4bd9c83e       registry-6fb4cdfc84-w9b9z
	a0eac9b63ba60       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  5 minutes ago        Running             tiller                                   0                   6fffd90033024       tiller-deploy-b48cc5f79-xhknj
	17f6796db505f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             5 minutes ago        Running             minikube-ingress-dns                     0                   234eb79485cd1       kube-ingress-dns-minikube
	1c626a3b93f82       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               6 minutes ago        Running             cloud-spanner-emulator                   0                   8d5b732ac553c       cloud-spanner-emulator-769b77f747-gkpvx
	2cc88a4bc27ae       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     6 minutes ago        Running             nvidia-device-plugin-ctr                 0                   f9d5e7e422ccc       nvidia-device-plugin-daemonset-2d5x4
	ca501c42a673b       6e38f40d628db                                                                                                                                6 minutes ago        Running             storage-provisioner                      0                   ee35656ff4dab       storage-provisioner
	96b0436049bfd       cbb01a7bd410d                                                                                                                                6 minutes ago        Running             coredns                                  0                   50942594ce346       coredns-6f6b679f8f-jjpz5
	3e68275cea827       cbb01a7bd410d                                                                                                                                6 minutes ago        Running             coredns                                  0                   10e305e405cb8       coredns-6f6b679f8f-k8jth
	ec9f84a5d059c       ad83b2ca7b09e                                                                                                                                6 minutes ago        Running             kube-proxy                               0                   6aef182d0c233       kube-proxy-ngbg9
	df1ae4949b890       604f5db92eaa8                                                                                                                                6 minutes ago        Running             kube-apiserver                           0                   eed72b2e0cb8b       kube-apiserver-addons-565000
	7491eeec20a6a       1766f54c897f0                                                                                                                                6 minutes ago        Running             kube-scheduler                           0                   714f1f244b199       kube-scheduler-addons-565000
	2f4f1dc29a3a5       045733566833c                                                                                                                                6 minutes ago        Running             kube-controller-manager                  0                   eca5c27fe48ba       kube-controller-manager-addons-565000
	bd4a544e47c77       2e96e5913fc06                                                                                                                                6 minutes ago        Running             etcd                                     0                   44c5486d095c1       etcd-addons-565000
	
	
	==> controller_ingress [e8ddd9c18d91] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0906 18:31:27.840683       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0906 18:31:27.840779       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0906 18:31:27.845119       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/amd64"
	I0906 18:31:28.058295       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0906 18:31:28.072348       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0906 18:31:28.078984       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0906 18:31:28.090015       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5108b76c-d002-4edb-be2c-40002f23dad1", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0906 18:31:28.091406       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"e715ab1b-f17f-4030-bc42-c6415657141d", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0906 18:31:28.091694       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"2c1d064e-0801-4483-a6f9-badd406a30d8", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0906 18:31:29.281363       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0906 18:31:29.281477       7 nginx.go:317] "Starting NGINX process"
	I0906 18:31:29.281807       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0906 18:31:29.281942       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0906 18:31:29.292146       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0906 18:31:29.292482       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-kttz2"
	I0906 18:31:29.297386       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-kttz2" node="addons-565000"
	I0906 18:31:29.315102       7 controller.go:213] "Backend successfully reloaded"
	I0906 18:31:29.315175       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0906 18:31:29.315377       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-kttz2", UID:"e7422f4d-ede8-4441-ad82-d74b6e64868a", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [3e68275cea82] <==
	Trace[887105750]: [30.0015111s] [30.0015111s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1399447648]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.825) (total time: 30001ms):
	Trace[1399447648]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:30:28.825)
	Trace[1399447648]: [30.001703327s] [30.001703327s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[281883191]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.825) (total time: 30001ms):
	Trace[281883191]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.826)
	Trace[281883191]: [30.001720152s] [30.001720152s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:53734 - 19685 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122631s
	[INFO] 10.244.0.8:53734 - 51175 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074673s
	[INFO] 10.244.0.8:53020 - 19533 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100379s
	[INFO] 10.244.0.8:53020 - 14156 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004184s
	[INFO] 10.244.0.8:60138 - 43494 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006767s
	[INFO] 10.244.0.8:60138 - 65253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036928s
	[INFO] 10.244.0.8:60822 - 7923 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108181s
	[INFO] 10.244.0.8:60822 - 15857 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000045526s
	[INFO] 10.244.0.8:38552 - 55881 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037007s
	[INFO] 10.244.0.8:38552 - 22607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000021723s
	[INFO] 10.244.0.26:46025 - 53921 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000208202s
	[INFO] 10.244.0.26:34684 - 54921 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162793s
	[INFO] 10.244.0.26:56877 - 37503 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000979085s
	
	
	==> coredns [96b0436049bf] <==
	[INFO] plugin/kubernetes: Trace[868897144]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.985) (total time: 30001ms):
	Trace[868897144]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.986)
	Trace[868897144]: [30.001513479s] [30.001513479s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1446559473]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.985) (total time: 30001ms):
	Trace[1446559473]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.986)
	Trace[1446559473]: [30.001200251s] [30.001200251s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[3925288]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.985) (total time: 30001ms):
	Trace[3925288]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.987)
	Trace[3925288]: [30.001663646s] [30.001663646s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:46408 - 4699 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142282s
	[INFO] 10.244.0.8:46408 - 41029 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047596s
	[INFO] 10.244.0.8:39697 - 48378 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070047s
	[INFO] 10.244.0.8:39697 - 53501 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042703s
	[INFO] 10.244.0.8:43424 - 3655 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079105s
	[INFO] 10.244.0.8:43424 - 37445 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043667s
	[INFO] 10.244.0.26:54180 - 6536 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180728s
	[INFO] 10.244.0.26:57220 - 58876 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084468s
	[INFO] 10.244.0.26:58974 - 20348 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111158s
	[INFO] 10.244.0.26:59652 - 30528 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000047741s
	[INFO] 10.244.0.26:51541 - 50008 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00055438s
	
	
	==> describe nodes <==
	Name:               addons-565000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-565000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-565000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_29_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-565000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-565000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-565000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:36:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:33:25 +0000   Fri, 06 Sep 2024 18:29:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:33:25 +0000   Fri, 06 Sep 2024 18:29:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:33:25 +0000   Fri, 06 Sep 2024 18:29:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:33:25 +0000   Fri, 06 Sep 2024 18:29:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.21
	  Hostname:    addons-565000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba8d7ebbe40f4db881630d84cd9f9b07
	  System UUID:                a75d4f7d-0000-0000-aa62-647125e97870
	  Boot ID:                    43c43a22-afbf-4d48-a2ee-e3ce87181c76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-gkpvx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  gadget                      gadget-6j9m6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  gcp-auth                    gcp-auth-89d5ffd79-9jkbf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-kttz2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         6m12s
	  kube-system                 coredns-6f6b679f8f-jjpz5                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m20s
	  kube-system                 coredns-6f6b679f8f-k8jth                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m20s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 csi-hostpathplugin-s2s7r                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 etcd-addons-565000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m26s
	  kube-system                 kube-apiserver-addons-565000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-addons-565000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-ngbg9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-addons-565000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 metrics-server-84c5f94fbc-s7gvk             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m14s
	  kube-system                 nvidia-device-plugin-daemonset-2d5x4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 registry-6fb4cdfc84-w9b9z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 registry-proxy-75mvw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 snapshot-controller-56fcc65765-278m8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 snapshot-controller-56fcc65765-4ljgq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 tiller-deploy-b48cc5f79-xhknj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  local-path-storage          local-path-provisioner-86d989889c-tsbl5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  volcano-system              volcano-admission-77d7d48b68-tmx68          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  volcano-system              volcano-controllers-56675bb4d5-tntd9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  volcano-system              volcano-scheduler-576bc46687-k5qvm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-vz5pv              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             658Mi (17%)  596Mi (15%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m17s                  kube-proxy       
	  Normal  Starting                 6m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node addons-565000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node addons-565000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s (x7 over 6m31s)  kubelet          Node addons-565000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s                  kubelet          Node addons-565000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s                  kubelet          Node addons-565000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s                  kubelet          Node addons-565000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m23s                  kubelet          Node addons-565000 status is now: NodeReady
	  Normal  RegisteredNode           6m22s                  node-controller  Node addons-565000 event: Registered Node addons-565000 in Controller
	
	
	==> dmesg <==
	[  +3.728585] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.057326] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.593496] systemd-fstab-generator[1502]: Ignoring "noauto" option for root device
	[  +4.380146] systemd-fstab-generator[1626]: Ignoring "noauto" option for root device
	[  +0.056131] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.985215] systemd-fstab-generator[2035]: Ignoring "noauto" option for root device
	[  +0.072859] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.280632] systemd-fstab-generator[2154]: Ignoring "noauto" option for root device
	[  +0.071766] kauditd_printk_skb: 12 callbacks suppressed
	[Sep 6 18:30] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.016253] kauditd_printk_skb: 141 callbacks suppressed
	[  +9.031633] kauditd_printk_skb: 80 callbacks suppressed
	[ +14.066384] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.144245] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.012317] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.162700] kauditd_printk_skb: 31 callbacks suppressed
	[  +7.139234] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:31] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.773109] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.780824] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.892089] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.547903] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 6 18:32] kauditd_printk_skb: 28 callbacks suppressed
	[ +41.821165] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.603321] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [bd4a544e47c7] <==
	{"level":"info","ts":"2024-09-06T18:29:56.531845Z","caller":"traceutil/trace.go:171","msg":"trace[1477146733] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"161.253602ms","start":"2024-09-06T18:29:56.370587Z","end":"2024-09-06T18:29:56.531840Z","steps":["trace[1477146733] 'process raft request'  (duration: 161.146633ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:29:56.531874Z","caller":"traceutil/trace.go:171","msg":"trace[1495975611] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"171.296299ms","start":"2024-09-06T18:29:56.360571Z","end":"2024-09-06T18:29:56.531867Z","steps":["trace[1495975611] 'process raft request'  (duration: 92.194046ms)","trace[1495975611] 'compare'  (duration: 78.841325ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:29:56.531938Z","caller":"traceutil/trace.go:171","msg":"trace[564360334] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"161.40619ms","start":"2024-09-06T18:29:56.370528Z","end":"2024-09-06T18:29:56.531934Z","steps":["trace[564360334] 'process raft request'  (duration: 161.160142ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:29:56.531960Z","caller":"traceutil/trace.go:171","msg":"trace[1802007709] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"153.312058ms","start":"2024-09-06T18:29:56.378644Z","end":"2024-09-06T18:29:56.531956Z","steps":["trace[1802007709] 'process raft request'  (duration: 153.107902ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:29:56.532026Z","caller":"traceutil/trace.go:171","msg":"trace[1089893951] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"161.457905ms","start":"2024-09-06T18:29:56.370564Z","end":"2024-09-06T18:29:56.532022Z","steps":["trace[1089893951] 'process raft request'  (duration: 161.150022ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:29:56.532033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.241837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6f6b679f8f-k8jth\" ","response":"range_response_count:1 size:3571"}
	{"level":"info","ts":"2024-09-06T18:29:56.532047Z","caller":"traceutil/trace.go:171","msg":"trace[966349755] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6f6b679f8f-k8jth; range_end:; response_count:1; response_revision:370; }","duration":"163.262516ms","start":"2024-09-06T18:29:56.368781Z","end":"2024-09-06T18:29:56.532043Z","steps":["trace[966349755] 'agreement among raft nodes before linearized reading'  (duration: 163.230349ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:29:56.532105Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.747547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:29:56.532119Z","caller":"traceutil/trace.go:171","msg":"trace[81472969] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:370; }","duration":"126.760398ms","start":"2024-09-06T18:29:56.405354Z","end":"2024-09-06T18:29:56.532114Z","steps":["trace[81472969] 'agreement among raft nodes before linearized reading'  (duration: 126.741501ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:05.400669Z","caller":"traceutil/trace.go:171","msg":"trace[1180326389] linearizableReadLoop","detail":"{readStateIndex:758; appliedIndex:757; }","duration":"197.023064ms","start":"2024-09-06T18:30:05.203636Z","end":"2024-09-06T18:30:05.400659Z","steps":["trace[1180326389] 'read index received'  (duration: 138.856121ms)","trace[1180326389] 'applied index is now lower than readState.Index'  (duration: 58.166582ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:30:05.400799Z","caller":"traceutil/trace.go:171","msg":"trace[708230896] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"206.096247ms","start":"2024-09-06T18:30:05.194698Z","end":"2024-09-06T18:30:05.400794Z","steps":["trace[708230896] 'process raft request'  (duration: 147.722312ms)","trace[708230896] 'compare'  (duration: 58.192431ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-06T18:30:05.400887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.242992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:05.400901Z","caller":"traceutil/trace.go:171","msg":"trace[95253260] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:741; }","duration":"197.264939ms","start":"2024-09-06T18:30:05.203632Z","end":"2024-09-06T18:30:05.400897Z","steps":["trace[95253260] 'agreement among raft nodes before linearized reading'  (duration: 197.234026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:05.401920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.776734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx\" ","response":"range_response_count:1 size:1009"}
	{"level":"info","ts":"2024-09-06T18:30:05.401941Z","caller":"traceutil/trace.go:171","msg":"trace[1880089099] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx; range_end:; response_count:1; response_revision:742; }","duration":"145.799915ms","start":"2024-09-06T18:30:05.256136Z","end":"2024-09-06T18:30:05.401936Z","steps":["trace[1880089099] 'agreement among raft nodes before linearized reading'  (duration: 145.688306ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:05.402014Z","caller":"traceutil/trace.go:171","msg":"trace[1986133511] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"145.808796ms","start":"2024-09-06T18:30:05.256200Z","end":"2024-09-06T18:30:05.402008Z","steps":["trace[1986133511] 'process raft request'  (duration: 145.590545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:05.402370Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.317482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6f6b679f8f-jjpz5\" ","response":"range_response_count:1 size:5091"}
	{"level":"info","ts":"2024-09-06T18:30:05.402384Z","caller":"traceutil/trace.go:171","msg":"trace[764227157] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6f6b679f8f-jjpz5; range_end:; response_count:1; response_revision:742; }","duration":"116.332238ms","start":"2024-09-06T18:30:05.286048Z","end":"2024-09-06T18:30:05.402380Z","steps":["trace[764227157] 'agreement among raft nodes before linearized reading'  (duration: 116.307035ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:07.838065Z","caller":"traceutil/trace.go:171","msg":"trace[1244408162] transaction","detail":"{read_only:false; response_revision:929; number_of_response:1; }","duration":"147.065686ms","start":"2024-09-06T18:30:07.690989Z","end":"2024-09-06T18:30:07.838055Z","steps":["trace[1244408162] 'process raft request'  (duration: 145.182017ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:07.838385Z","caller":"traceutil/trace.go:171","msg":"trace[1088116875] transaction","detail":"{read_only:false; response_revision:930; number_of_response:1; }","duration":"147.162007ms","start":"2024-09-06T18:30:07.691218Z","end":"2024-09-06T18:30:07.838380Z","steps":["trace[1088116875] 'process raft request'  (duration: 147.10043ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:07.838532Z","caller":"traceutil/trace.go:171","msg":"trace[1157367360] transaction","detail":"{read_only:false; response_revision:931; number_of_response:1; }","duration":"147.273724ms","start":"2024-09-06T18:30:07.691254Z","end":"2024-09-06T18:30:07.838528Z","steps":["trace[1157367360] 'process raft request'  (duration: 147.088182ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:35.893117Z","caller":"traceutil/trace.go:171","msg":"trace[345980273] linearizableReadLoop","detail":"{readStateIndex:1085; appliedIndex:1084; }","duration":"118.48708ms","start":"2024-09-06T18:30:35.774616Z","end":"2024-09-06T18:30:35.893103Z","steps":["trace[345980273] 'read index received'  (duration: 118.349163ms)","trace[345980273] 'applied index is now lower than readState.Index'  (duration: 137.226µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:30:35.893231Z","caller":"traceutil/trace.go:171","msg":"trace[256057286] transaction","detail":"{read_only:false; response_revision:1061; number_of_response:1; }","duration":"118.810799ms","start":"2024-09-06T18:30:35.774413Z","end":"2024-09-06T18:30:35.893224Z","steps":["trace[256057286] 'process raft request'  (duration: 118.567955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:35.893386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.76113ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:35.893411Z","caller":"traceutil/trace.go:171","msg":"trace[1103130949] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1061; }","duration":"118.790592ms","start":"2024-09-06T18:30:35.774614Z","end":"2024-09-06T18:30:35.893404Z","steps":["trace[1103130949] 'agreement among raft nodes before linearized reading'  (duration: 118.753414ms)"],"step_count":1}
	
	
	==> gcp-auth [d5be48ce48be] <==
	2024/09/06 18:32:58 GCP Auth Webhook started!
	2024/09/06 18:33:14 Ready to marshal response ...
	2024/09/06 18:33:14 Ready to write response ...
	2024/09/06 18:33:15 Ready to marshal response ...
	2024/09/06 18:33:15 Ready to write response ...
	
	
	==> kernel <==
	 18:36:17 up 6 min,  0 users,  load average: 0.15, 0.65, 0.44
	Linux addons-565000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [df1ae4949b89] <==
	W0906 18:31:20.918585       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:21.963994       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:23.006366       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:24.020997       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:25.054186       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:26.113860       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:27.139901       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:28.187508       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:29.262390       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:30.269189       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:31.350675       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:31.853016       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.150.208:443: connect: connection refused
	E0906 18:31:31.853045       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.150.208:443: connect: connection refused" logger="UnhandledError"
	W0906 18:31:31.854649       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:32.403298       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:33.415313       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:34.425178       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:32:11.924032       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.150.208:443: connect: connection refused
	E0906 18:32:11.924079       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.150.208:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:11.983798       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.150.208:443: connect: connection refused
	E0906 18:32:11.984126       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.150.208:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:53.779967       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.150.208:443: connect: connection refused
	E0906 18:32:53.780057       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.150.208:443: connect: connection refused" logger="UnhandledError"
	I0906 18:33:14.880747       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0906 18:33:14.895991       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [2f4f1dc29a3a] <==
	I0906 18:32:12.007397       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:13.289708       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:13.307424       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:14.328311       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:14.444808       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:15.353817       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:15.449892       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:15.455490       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:15.465225       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:15.476989       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:16.358599       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:16.362704       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:16.375080       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:45.010842       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0906 18:32:45.030218       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0906 18:32:46.005754       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0906 18:32:46.018868       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0906 18:32:53.793694       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="16.471619ms"
	I0906 18:32:53.806380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.164317ms"
	I0906 18:32:53.806447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="27.906µs"
	I0906 18:32:53.815198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="22.19µs"
	I0906 18:32:59.045404       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="17.866235ms"
	I0906 18:32:59.045457       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="35.464µs"
	I0906 18:33:14.744129       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0906 18:33:25.860170       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-565000"
	
	
	==> kube-proxy [ec9f84a5d059] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:29:58.755524       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:29:58.785806       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.21"]
	E0906 18:29:58.785875       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:29:58.923560       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:29:58.923609       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:29:58.923627       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:29:58.936971       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:29:58.937151       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:29:58.937159       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:29:58.938393       1 config.go:197] "Starting service config controller"
	I0906 18:29:58.938409       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:29:58.938423       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:29:58.938426       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:29:58.949921       1 config.go:326] "Starting node config controller"
	I0906 18:29:58.949931       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:29:59.038845       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 18:29:59.038875       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:29:59.050218       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7491eeec20a6] <==
	W0906 18:29:48.007678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:29:48.007861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.007876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 18:29:48.008041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.007966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 18:29:48.008215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.008102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:48.008440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.922727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 18:29:48.922876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.948745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 18:29:48.948964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.996465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:29:48.996631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.015909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:29:49.015967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.022235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:49.022302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.052321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:29:49.052365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.112247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:49.112297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.218398       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 18:29:49.218461       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 18:29:51.293148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:34:51 addons-565000 kubelet[2042]: E0906 18:34:51.560960    2042 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-6j9m6_gadget(c7dfb082-50ec-4c2d-8cdc-84392970a307)\"" pod="gadget/gadget-6j9m6" podUID="c7dfb082-50ec-4c2d-8cdc-84392970a307"
	Sep 06 18:34:51 addons-565000 kubelet[2042]: E0906 18:34:51.574222    2042 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 18:34:51 addons-565000 kubelet[2042]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:34:51 addons-565000 kubelet[2042]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:34:51 addons-565000 kubelet[2042]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:34:51 addons-565000 kubelet[2042]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:35:03 addons-565000 kubelet[2042]: I0906 18:35:03.557307    2042 scope.go:117] "RemoveContainer" containerID="f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7"
	Sep 06 18:35:03 addons-565000 kubelet[2042]: E0906 18:35:03.557523    2042 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-6j9m6_gadget(c7dfb082-50ec-4c2d-8cdc-84392970a307)\"" pod="gadget/gadget-6j9m6" podUID="c7dfb082-50ec-4c2d-8cdc-84392970a307"
	Sep 06 18:35:14 addons-565000 kubelet[2042]: I0906 18:35:14.556718    2042 scope.go:117] "RemoveContainer" containerID="f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7"
	Sep 06 18:35:14 addons-565000 kubelet[2042]: E0906 18:35:14.556855    2042 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-6j9m6_gadget(c7dfb082-50ec-4c2d-8cdc-84392970a307)\"" pod="gadget/gadget-6j9m6" podUID="c7dfb082-50ec-4c2d-8cdc-84392970a307"
	Sep 06 18:35:18 addons-565000 kubelet[2042]: I0906 18:35:18.556368    2042 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-75mvw" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:35:22 addons-565000 kubelet[2042]: I0906 18:35:22.556702    2042 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-w9b9z" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:35:25 addons-565000 kubelet[2042]: I0906 18:35:25.556793    2042 scope.go:117] "RemoveContainer" containerID="f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7"
	Sep 06 18:35:25 addons-565000 kubelet[2042]: E0906 18:35:25.556951    2042 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-6j9m6_gadget(c7dfb082-50ec-4c2d-8cdc-84392970a307)\"" pod="gadget/gadget-6j9m6" podUID="c7dfb082-50ec-4c2d-8cdc-84392970a307"
	Sep 06 18:35:38 addons-565000 kubelet[2042]: I0906 18:35:38.556296    2042 scope.go:117] "RemoveContainer" containerID="f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7"
	Sep 06 18:35:38 addons-565000 kubelet[2042]: E0906 18:35:38.556665    2042 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-6j9m6_gadget(c7dfb082-50ec-4c2d-8cdc-84392970a307)\"" pod="gadget/gadget-6j9m6" podUID="c7dfb082-50ec-4c2d-8cdc-84392970a307"
	Sep 06 18:35:51 addons-565000 kubelet[2042]: I0906 18:35:51.557499    2042 scope.go:117] "RemoveContainer" containerID="f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7"
	Sep 06 18:35:51 addons-565000 kubelet[2042]: E0906 18:35:51.557699    2042 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-6j9m6_gadget(c7dfb082-50ec-4c2d-8cdc-84392970a307)\"" pod="gadget/gadget-6j9m6" podUID="c7dfb082-50ec-4c2d-8cdc-84392970a307"
	Sep 06 18:35:51 addons-565000 kubelet[2042]: E0906 18:35:51.572974    2042 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 18:35:51 addons-565000 kubelet[2042]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:35:51 addons-565000 kubelet[2042]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:35:51 addons-565000 kubelet[2042]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:35:51 addons-565000 kubelet[2042]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:36:06 addons-565000 kubelet[2042]: I0906 18:36:06.556829    2042 scope.go:117] "RemoveContainer" containerID="f11edf970ac50616fc59f6832a3d54fc70306c29b4417d4940d9bad0cd0176e7"
	Sep 06 18:36:06 addons-565000 kubelet[2042]: E0906 18:36:06.557401    2042 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-6j9m6_gadget(c7dfb082-50ec-4c2d-8cdc-84392970a307)\"" pod="gadget/gadget-6j9m6" podUID="c7dfb082-50ec-4c2d-8cdc-84392970a307"
	
	
	==> storage-provisioner [ca501c42a673] <==
	I0906 18:30:03.543389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:03.559445       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:03.559470       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:03.575576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:03.575729       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-565000_2f68d27c-ee72-40a1-9b10-86450d95520c!
	I0906 18:30:03.576811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a969d9b-e992-4942-aff3-78e2d12b7575", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-565000_2f68d27c-ee72-40a1-9b10-86450d95520c became leader
	I0906 18:30:03.676508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-565000_2f68d27c-ee72-40a1-9b10-86450d95520c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-565000 -n addons-565000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-565000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-565000 describe pod ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-565000 describe pod ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp test-job-nginx-0: exit status 1 (51.129481ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5gcgq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4nkbp" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-565000 describe pod ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (198.53s)

TestAddons/parallel/Registry (74.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.637272ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-w9b9z" [9a2c23b0-9024-42ce-9924-42afdfdbc0de] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003827134s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-75mvw" [1241891a-f5f4-4e10-ad7b-3c7977e9bb11] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005801302s
addons_test.go:342: (dbg) Run:  kubectl --context addons-565000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-565000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-565000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.070447546s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-565000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 ip
2024/09/06 11:45:31 [DEBUG] GET http://192.169.0.21:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-565000 -n addons-565000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-565000 logs -n 25: (3.134136237s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |                     |
	|         | -p download-only-747000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-747000              | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| start   | -o=json --download-only              | download-only-709000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | -p download-only-709000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-709000              | download-only-709000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-747000              | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-709000              | download-only-709000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| start   | --download-only -p                   | binary-mirror-050000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | binary-mirror-050000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:53785               |                      |         |         |                     |                     |
	|         | --driver=hyperkit                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-050000              | binary-mirror-050000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| addons  | disable dashboard -p                 | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | addons-565000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | addons-565000                        |                      |         |         |                     |                     |
	| start   | -p addons-565000 --wait=true         | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:32 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=hyperkit  --addons=ingress  |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | addons-565000 addons                 | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:45 PDT | 06 Sep 24 11:45 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-565000 addons                 | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:45 PDT | 06 Sep 24 11:45 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-565000 addons disable         | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:45 PDT | 06 Sep 24 11:45 PDT |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| ip      | addons-565000 ip                     | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:45 PDT | 06 Sep 24 11:45 PDT |
	| addons  | addons-565000 addons disable         | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:45 PDT | 06 Sep 24 11:45 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-565000        | jenkins | v1.34.0 | 06 Sep 24 11:45 PDT |                     |
	|         | -p addons-565000                     |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:29:15
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 11:29:15.883388    8455 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:29:15.883642    8455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:15.883648    8455 out.go:358] Setting ErrFile to fd 2...
	I0906 11:29:15.883652    8455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:15.883826    8455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 11:29:15.885262    8455 out.go:352] Setting JSON to false
	I0906 11:29:15.907483    8455 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8926,"bootTime":1725638429,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 11:29:15.907577    8455 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:29:15.929590    8455 out.go:177] * [addons-565000] minikube v1.34.0 on Darwin 14.6.1
	I0906 11:29:15.971093    8455 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:29:15.971196    8455 notify.go:220] Checking for updates...
	I0906 11:29:16.012914    8455 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:29:16.034449    8455 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 11:29:16.055383    8455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:29:16.078183    8455 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 11:29:16.099281    8455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:29:16.120935    8455 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:29:16.151256    8455 out.go:177] * Using the hyperkit driver based on user configuration
	I0906 11:29:16.193440    8455 start.go:297] selected driver: hyperkit
	I0906 11:29:16.193465    8455 start.go:901] validating driver "hyperkit" against <nil>
	I0906 11:29:16.193483    8455 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:29:16.197925    8455 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:29:16.198041    8455 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 11:29:16.206646    8455 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 11:29:16.210585    8455 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:16.210604    8455 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 11:29:16.210634    8455 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:29:16.210830    8455 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:29:16.210890    8455 cni.go:84] Creating CNI manager for ""
	I0906 11:29:16.210904    8455 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:16.210912    8455 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 11:29:16.210985    8455 start.go:340] cluster config:
	{Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:29:16.211074    8455 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:29:16.232228    8455 out.go:177] * Starting "addons-565000" primary control-plane node in "addons-565000" cluster
	I0906 11:29:16.253269    8455 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:16.253321    8455 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 11:29:16.253338    8455 cache.go:56] Caching tarball of preloaded images
	I0906 11:29:16.253517    8455 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 11:29:16.253531    8455 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 11:29:16.253854    8455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/config.json ...
	I0906 11:29:16.253877    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/config.json: {Name:mkafc2af2209682ce31cfc92a3c90d867b3f2254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:16.254271    8455 start.go:360] acquireMachinesLock for addons-565000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 11:29:16.254423    8455 start.go:364] duration metric: took 135.711µs to acquireMachinesLock for "addons-565000"
	I0906 11:29:16.254448    8455 start.go:93] Provisioning new machine with config: &{Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 11:29:16.254514    8455 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 11:29:16.296199    8455 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 11:29:16.296445    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:16.296512    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:16.306646    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53792
	I0906 11:29:16.306979    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:16.307387    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:16.307396    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:16.307634    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:16.307755    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:16.307849    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:16.307965    8455 start.go:159] libmachine.API.Create for "addons-565000" (driver="hyperkit")
	I0906 11:29:16.307991    8455 client.go:168] LocalClient.Create starting
	I0906 11:29:16.308030    8455 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 11:29:16.380732    8455 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 11:29:16.444278    8455 main.go:141] libmachine: Running pre-create checks...
	I0906 11:29:16.444287    8455 main.go:141] libmachine: (addons-565000) Calling .PreCreateCheck
	I0906 11:29:16.444485    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:16.444578    8455 main.go:141] libmachine: (addons-565000) Calling .GetConfigRaw
	I0906 11:29:16.444992    8455 main.go:141] libmachine: Creating machine...
	I0906 11:29:16.445003    8455 main.go:141] libmachine: (addons-565000) Calling .Create
	I0906 11:29:16.445091    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:16.445216    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.445088    8463 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 11:29:16.445281    8455 main.go:141] libmachine: (addons-565000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 11:29:16.637769    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.637670    8463 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa...
	I0906 11:29:16.794664    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.794585    8463 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/addons-565000.rawdisk...
	I0906 11:29:16.794686    8455 main.go:141] libmachine: (addons-565000) DBG | Writing magic tar header
	I0906 11:29:16.794696    8455 main.go:141] libmachine: (addons-565000) DBG | Writing SSH key tar header
	I0906 11:29:16.795443    8455 main.go:141] libmachine: (addons-565000) DBG | I0906 11:29:16.795365    8463 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000 ...
	I0906 11:29:17.182872    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:17.182898    8455 main.go:141] libmachine: (addons-565000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/hyperkit.pid
	I0906 11:29:17.182987    8455 main.go:141] libmachine: (addons-565000) DBG | Using UUID a75d8b75-3333-4f7d-aa62-647125e97870
	I0906 11:29:17.412086    8455 main.go:141] libmachine: (addons-565000) DBG | Generated MAC ae:ba:57:3d:b2:90
	I0906 11:29:17.412126    8455 main.go:141] libmachine: (addons-565000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-565000
	I0906 11:29:17.412238    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75d8b75-3333-4f7d-aa62-647125e97870", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 11:29:17.412301    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a75d8b75-3333-4f7d-aa62-647125e97870", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 11:29:17.412374    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a75d8b75-3333-4f7d-aa62-647125e97870", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/addons-565000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-565000"}
	I0906 11:29:17.412419    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a75d8b75-3333-4f7d-aa62-647125e97870 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/addons-565000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=addons-565000"
	I0906 11:29:17.412441    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 11:29:17.415233    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 DEBUG: hyperkit: Pid is 8468
	I0906 11:29:17.415677    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 0
	I0906 11:29:17.415691    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:17.415745    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:17.416589    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:17.416679    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:17.416694    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:17.416703    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:17.416714    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:17.416740    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:17.416752    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:17.416775    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:17.416787    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:17.416795    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:17.416803    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:17.416824    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:17.416833    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:17.416840    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:17.416848    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:17.416856    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:17.416863    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:17.416868    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:17.416882    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:17.416893    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:17.416917    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:17.422855    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 11:29:17.473882    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 11:29:17.474516    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 11:29:17.474538    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 11:29:17.474546    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 11:29:17.474554    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:17 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 11:29:18.007896    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 11:29:18.007909    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 11:29:18.123927    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 11:29:18.123963    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 11:29:18.123978    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 11:29:18.123991    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 11:29:18.124837    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 11:29:18.124847    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:18 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 11:29:19.417139    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 1
	I0906 11:29:19.417154    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:19.417210    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:19.417995    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:19.418020    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:19.418032    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:19.418044    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:19.418051    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:19.418057    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:19.418072    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:19.418097    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:19.418111    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:19.418120    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:19.418129    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:19.418148    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:19.418160    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:19.418168    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:19.418177    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:19.418183    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:19.418189    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:19.418194    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:19.418207    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:19.418215    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:19.418223    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:21.418795    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 2
	I0906 11:29:21.418809    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:21.418939    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:21.419762    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:21.419839    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:21.419849    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:21.419858    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:21.419868    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:21.419876    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:21.419882    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:21.419889    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:21.419895    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:21.419901    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:21.419909    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:21.419923    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:21.419929    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:21.419943    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:21.419966    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:21.419973    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:21.419981    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:21.419989    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:21.419996    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:21.420003    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:21.420012    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:23.420726    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 3
	I0906 11:29:23.420742    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:23.420820    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:23.421608    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:23.421632    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:23.421640    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:23.421648    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:23.421656    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:23.421669    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:23.421683    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:23.421692    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:23.421700    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:23.421712    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:23.421718    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:23.421724    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:23.421733    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:23.421745    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:23.421756    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:23.421769    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:23.421792    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:23.421799    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:23.421808    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:23.421815    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:23.421822    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:23.722898    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 11:29:23.722927    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 11:29:23.722937    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 11:29:23.741397    8455 main.go:141] libmachine: (addons-565000) DBG | 2024/09/06 11:29:23 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 11:29:25.421888    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 4
	I0906 11:29:25.421902    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:25.421987    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:25.422765    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:25.422824    8455 main.go:141] libmachine: (addons-565000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0906 11:29:25.422837    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 11:29:25.422856    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 11:29:25.422863    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 11:29:25.422869    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 11:29:25.422877    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 11:29:25.422885    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 11:29:25.422891    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 11:29:25.422897    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 11:29:25.422903    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 11:29:25.422915    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 11:29:25.422923    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 11:29:25.422929    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 11:29:25.422937    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 11:29:25.422944    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 11:29:25.422951    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 11:29:25.422957    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 11:29:25.422966    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 11:29:25.422972    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 11:29:25.422982    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 11:29:27.423299    8455 main.go:141] libmachine: (addons-565000) DBG | Attempt 5
	I0906 11:29:27.423330    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:27.423401    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:27.424840    8455 main.go:141] libmachine: (addons-565000) DBG | Searching for ae:ba:57:3d:b2:90 in /var/db/dhcpd_leases ...
	I0906 11:29:27.424957    8455 main.go:141] libmachine: (addons-565000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I0906 11:29:27.424973    8455 main.go:141] libmachine: (addons-565000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 11:29:27.425004    8455 main.go:141] libmachine: (addons-565000) DBG | Found match: ae:ba:57:3d:b2:90
	I0906 11:29:27.425015    8455 main.go:141] libmachine: (addons-565000) DBG | IP: 192.169.0.21
	I0906 11:29:27.425068    8455 main.go:141] libmachine: (addons-565000) Calling .GetConfigRaw
	I0906 11:29:27.425921    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:27.426067    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:27.426198    8455 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 11:29:27.426215    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:27.426358    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:27.426421    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:27.427450    8455 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 11:29:27.427498    8455 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 11:29:27.427504    8455 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 11:29:27.427509    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:27.427647    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:27.427767    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:27.427907    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:27.428026    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:27.428618    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:27.428776    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:27.428783    8455 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 11:29:28.485491    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 11:29:28.485504    8455 main.go:141] libmachine: Detecting the provisioner...
	I0906 11:29:28.485510    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.485638    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.485732    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.485825    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.485921    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.486063    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:28.486203    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:28.486211    8455 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 11:29:28.541039    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 11:29:28.541082    8455 main.go:141] libmachine: found compatible host: buildroot
	I0906 11:29:28.541087    8455 main.go:141] libmachine: Provisioning with buildroot...
	I0906 11:29:28.541093    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:28.541225    8455 buildroot.go:166] provisioning hostname "addons-565000"
	I0906 11:29:28.541236    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:28.541348    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.541440    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.541519    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.541619    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.541714    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.541833    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:28.541981    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:28.541990    8455 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-565000 && echo "addons-565000" | sudo tee /etc/hostname
	I0906 11:29:28.605687    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-565000
	
	I0906 11:29:28.605704    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.605851    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.605953    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.606045    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.606130    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.606273    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:28.606426    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:28.606438    8455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-565000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-565000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-565000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 11:29:28.668881    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 11:29:28.668904    8455 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 11:29:28.668920    8455 buildroot.go:174] setting up certificates
	I0906 11:29:28.668931    8455 provision.go:84] configureAuth start
	I0906 11:29:28.668938    8455 main.go:141] libmachine: (addons-565000) Calling .GetMachineName
	I0906 11:29:28.669069    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:28.669154    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.669232    8455 provision.go:143] copyHostCerts
	I0906 11:29:28.669320    8455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 11:29:28.669571    8455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 11:29:28.669745    8455 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 11:29:28.669880    8455 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.addons-565000 san=[127.0.0.1 192.169.0.21 addons-565000 localhost minikube]
	I0906 11:29:28.989037    8455 provision.go:177] copyRemoteCerts
	I0906 11:29:28.989099    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 11:29:28.989116    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:28.989268    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:28.989372    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:28.989487    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:28.989592    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:29.023416    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 11:29:29.043644    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 11:29:29.064209    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 11:29:29.084761    8455 provision.go:87] duration metric: took 415.817386ms to configureAuth
	I0906 11:29:29.084776    8455 buildroot.go:189] setting minikube options for container-runtime
	I0906 11:29:29.084909    8455 config.go:182] Loaded profile config "addons-565000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:29.084925    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:29.085066    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:29.085161    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:29.085288    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.085426    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.085537    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:29.085695    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:29.085860    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:29.085868    8455 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 11:29:29.142353    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 11:29:29.142368    8455 buildroot.go:70] root file system type: tmpfs
	I0906 11:29:29.142441    8455 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 11:29:29.142454    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:29.142582    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:29.142678    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.142766    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.142860    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:29.142978    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:29.143115    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:29.143161    8455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 11:29:29.208562    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 11:29:29.208589    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:29.208722    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:29.208806    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.208884    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:29.208971    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:29.209114    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:29.209258    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:29.209271    8455 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 11:29:30.749304    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 11:29:30.749319    8455 main.go:141] libmachine: Checking connection to Docker...
	I0906 11:29:30.749326    8455 main.go:141] libmachine: (addons-565000) Calling .GetURL
	I0906 11:29:30.749466    8455 main.go:141] libmachine: Docker is up and running!
	I0906 11:29:30.749473    8455 main.go:141] libmachine: Reticulating splines...
	I0906 11:29:30.749478    8455 client.go:171] duration metric: took 14.441523854s to LocalClient.Create
	I0906 11:29:30.749497    8455 start.go:167] duration metric: took 14.441575146s to libmachine.API.Create "addons-565000"
	I0906 11:29:30.749505    8455 start.go:293] postStartSetup for "addons-565000" (driver="hyperkit")
	I0906 11:29:30.749512    8455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 11:29:30.749523    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.749678    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 11:29:30.749696    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.749791    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.749878    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.749968    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.750059    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:30.794342    8455 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 11:29:30.797497    8455 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 11:29:30.797511    8455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 11:29:30.797619    8455 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 11:29:30.797671    8455 start.go:296] duration metric: took 48.161941ms for postStartSetup
	I0906 11:29:30.797695    8455 main.go:141] libmachine: (addons-565000) Calling .GetConfigRaw
	I0906 11:29:30.798272    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:30.798426    8455 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/config.json ...
	I0906 11:29:30.798750    8455 start.go:128] duration metric: took 14.544266633s to createHost
	I0906 11:29:30.798763    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.798855    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.798951    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.799052    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.799145    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.799260    8455 main.go:141] libmachine: Using SSH client type: native
	I0906 11:29:30.799382    8455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x92fcea0] 0x92ffc00 <nil>  [] 0s} 192.169.0.21 22 <nil> <nil>}
	I0906 11:29:30.799389    8455 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 11:29:30.859504    8455 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647369.826213571
	
	I0906 11:29:30.859517    8455 fix.go:216] guest clock: 1725647369.826213571
	I0906 11:29:30.859522    8455 fix.go:229] Guest: 2024-09-06 11:29:29.826213571 -0700 PDT Remote: 2024-09-06 11:29:30.798758 -0700 PDT m=+14.950664910 (delta=-972.544429ms)
	I0906 11:29:30.859543    8455 fix.go:200] guest clock delta is within tolerance: -972.544429ms
	I0906 11:29:30.859547    8455 start.go:83] releasing machines lock for "addons-565000", held for 14.605157821s
	I0906 11:29:30.859567    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.859695    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:30.859777    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.860039    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.860136    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:30.860225    8455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 11:29:30.860254    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.860316    8455 ssh_runner.go:195] Run: cat /version.json
	I0906 11:29:30.860329    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:30.860342    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.860440    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:30.860445    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.860531    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.860555    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:30.860631    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:30.860647    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:30.860731    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:30.949966    8455 ssh_runner.go:195] Run: systemctl --version
	I0906 11:29:30.955203    8455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 11:29:30.959375    8455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 11:29:30.959418    8455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 11:29:30.971757    8455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 11:29:30.971780    8455 start.go:495] detecting cgroup driver to use...
	I0906 11:29:30.971894    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:29:30.986977    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 11:29:30.996071    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 11:29:31.005096    8455 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 11:29:31.005145    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 11:29:31.014136    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:29:31.023055    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 11:29:31.032062    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 11:29:31.041022    8455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 11:29:31.050308    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 11:29:31.059248    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 11:29:31.068176    8455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 11:29:31.077411    8455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 11:29:31.085555    8455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 11:29:31.093479    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:31.201145    8455 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 11:29:31.222253    8455 start.go:495] detecting cgroup driver to use...
	I0906 11:29:31.222341    8455 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 11:29:31.237591    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:29:31.253485    8455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 11:29:31.268036    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 11:29:31.278338    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 11:29:31.288358    8455 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 11:29:31.309700    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 11:29:31.320206    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 11:29:31.335018    8455 ssh_runner.go:195] Run: which cri-dockerd
	I0906 11:29:31.337800    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 11:29:31.345095    8455 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 11:29:31.358404    8455 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 11:29:31.456884    8455 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 11:29:31.564139    8455 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 11:29:31.564212    8455 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 11:29:31.578809    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:31.694306    8455 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 11:29:33.995914    8455 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.301595395s)
	I0906 11:29:33.995975    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 11:29:34.007272    8455 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 11:29:34.021082    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:29:34.032208    8455 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 11:29:34.139494    8455 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 11:29:34.248209    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:34.362394    8455 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 11:29:34.377274    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 11:29:34.388527    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:34.483945    8455 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 11:29:34.551558    8455 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 11:29:34.551679    8455 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 11:29:34.556036    8455 start.go:563] Will wait 60s for crictl version
	I0906 11:29:34.556089    8455 ssh_runner.go:195] Run: which crictl
	I0906 11:29:34.559101    8455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 11:29:34.585938    8455 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 11:29:34.586008    8455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:29:34.603332    8455 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 11:29:34.640136    8455 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 11:29:34.640230    8455 main.go:141] libmachine: (addons-565000) Calling .GetIP
	I0906 11:29:34.640627    8455 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 11:29:34.645004    8455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 11:29:34.655288    8455 kubeadm.go:883] updating cluster {Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 11:29:34.655356    8455 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:34.655414    8455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:29:34.667257    8455 docker.go:685] Got preloaded images: 
	I0906 11:29:34.667268    8455 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0906 11:29:34.667320    8455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 11:29:34.675463    8455 ssh_runner.go:195] Run: which lz4
	I0906 11:29:34.678431    8455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 11:29:34.681450    8455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 11:29:34.681468    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342554258 bytes)
	I0906 11:29:35.623019    8455 docker.go:649] duration metric: took 944.631008ms to copy over tarball
	I0906 11:29:35.623081    8455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 11:29:38.069500    8455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.446408457s)
	I0906 11:29:38.069514    8455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 11:29:38.095973    8455 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0906 11:29:38.105513    8455 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0906 11:29:38.119308    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:38.219825    8455 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 11:29:40.635437    8455 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.415599665s)
	I0906 11:29:40.635531    8455 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 11:29:40.656401    8455 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 11:29:40.656422    8455 cache_images.go:84] Images are preloaded, skipping loading
	I0906 11:29:40.656434    8455 kubeadm.go:934] updating node { 192.169.0.21 8443 v1.31.0 docker true true} ...
	I0906 11:29:40.656522    8455 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-565000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 11:29:40.656588    8455 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 11:29:40.695005    8455 cni.go:84] Creating CNI manager for ""
	I0906 11:29:40.695025    8455 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:40.695036    8455 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 11:29:40.695050    8455 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.21 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-565000 NodeName:addons-565000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 11:29:40.695156    8455 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-565000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 11:29:40.695220    8455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 11:29:40.703707    8455 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 11:29:40.703756    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 11:29:40.712181    8455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 11:29:40.725977    8455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 11:29:40.739417    8455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0906 11:29:40.752976    8455 ssh_runner.go:195] Run: grep 192.169.0.21	control-plane.minikube.internal$ /etc/hosts
	I0906 11:29:40.756627    8455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 11:29:40.767138    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:40.874555    8455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:29:40.890663    8455 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000 for IP: 192.169.0.21
	I0906 11:29:40.890677    8455 certs.go:194] generating shared ca certs ...
	I0906 11:29:40.890688    8455 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:40.914930    8455 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 11:29:41.039754    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt ...
	I0906 11:29:41.039769    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt: {Name:mk0d4b22561f1ead1381b05661aa88ebed63b123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.040121    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key ...
	I0906 11:29:41.040129    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key: {Name:mk31784e271a5fc246284ae6ec9ced9f17941bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.040332    8455 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 11:29:41.169434    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt ...
	I0906 11:29:41.169449    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt: {Name:mkaee423f73427fa66087c4472bcd548d6c5d062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.169752    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key ...
	I0906 11:29:41.169760    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key: {Name:mk161526db6120d6dbaaf8eaf1186e0c40edb514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.169969    8455 certs.go:256] generating profile certs ...
	I0906 11:29:41.170022    8455 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.key
	I0906 11:29:41.170035    8455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt with IP's: []
	I0906 11:29:41.239632    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt ...
	I0906 11:29:41.239650    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: {Name:mke2a26b02855371d7edc15c55e24e65da8b67c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.239958    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.key ...
	I0906 11:29:41.239967    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.key: {Name:mk3c075150ed2df82f62269c4bd954c515fa0583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.240192    8455 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca
	I0906 11:29:41.240215    8455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.21]
	I0906 11:29:41.336445    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca ...
	I0906 11:29:41.336462    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca: {Name:mk91b2ee67c89c47e1d49f9bb6d42fbfd0bfdd5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.336808    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca ...
	I0906 11:29:41.336818    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca: {Name:mkc44f6221b70f1996427e602350a1814ac148ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.337041    8455 certs.go:381] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt.b816bfca -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt
	I0906 11:29:41.337269    8455 certs.go:385] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key.b816bfca -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key
	I0906 11:29:41.337441    8455 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key
	I0906 11:29:41.337462    8455 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt with IP's: []
	I0906 11:29:41.454677    8455 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt ...
	I0906 11:29:41.454692    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt: {Name:mk79c9ec2cf3aec157dc0c69fc61ebe245f21c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.455011    8455 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key ...
	I0906 11:29:41.455026    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key: {Name:mk653cc99373bf1ec78acbf11c780a3dc4359bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:41.455471    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 11:29:41.455522    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 11:29:41.455554    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 11:29:41.455584    8455 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 11:29:41.456140    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 11:29:41.476588    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 11:29:41.495712    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 11:29:41.515425    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 11:29:41.544278    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 11:29:41.572544    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 11:29:41.592257    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 11:29:41.612143    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 11:29:41.631437    8455 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 11:29:41.651353    8455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 11:29:41.665525    8455 ssh_runner.go:195] Run: openssl version
	I0906 11:29:41.670483    8455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 11:29:41.680279    8455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:41.683774    8455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6  2024 /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:41.683809    8455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 11:29:41.688035    8455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 11:29:41.697222    8455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 11:29:41.700213    8455 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 11:29:41.700259    8455 kubeadm.go:392] StartCluster: {Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:29:41.700348    8455 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 11:29:41.712942    8455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 11:29:41.721902    8455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 11:29:41.730279    8455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 11:29:41.738350    8455 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 11:29:41.738358    8455 kubeadm.go:157] found existing configuration files:
	
	I0906 11:29:41.738395    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 11:29:41.746116    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 11:29:41.746158    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 11:29:41.754029    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 11:29:41.762605    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 11:29:41.762660    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 11:29:41.770800    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 11:29:41.778477    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 11:29:41.778520    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 11:29:41.786466    8455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 11:29:41.794132    8455 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 11:29:41.794169    8455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 11:29:41.801983    8455 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 11:29:41.835594    8455 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 11:29:41.835672    8455 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 11:29:41.913800    8455 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 11:29:41.913897    8455 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 11:29:41.913973    8455 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 11:29:41.922430    8455 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 11:29:41.931910    8455 out.go:235]   - Generating certificates and keys ...
	I0906 11:29:41.932004    8455 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 11:29:41.932061    8455 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 11:29:42.134277    8455 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 11:29:42.212129    8455 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 11:29:42.460420    8455 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 11:29:42.954755    8455 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 11:29:43.251036    8455 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 11:29:43.251158    8455 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-565000 localhost] and IPs [192.169.0.21 127.0.0.1 ::1]
	I0906 11:29:43.328007    8455 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 11:29:43.328148    8455 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-565000 localhost] and IPs [192.169.0.21 127.0.0.1 ::1]
	I0906 11:29:43.474236    8455 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 11:29:43.705955    8455 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 11:29:44.021455    8455 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 11:29:44.021607    8455 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 11:29:44.247495    8455 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 11:29:44.516381    8455 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 11:29:44.975732    8455 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 11:29:45.075378    8455 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 11:29:45.126391    8455 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 11:29:45.126858    8455 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 11:29:45.128655    8455 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 11:29:45.150072    8455 out.go:235]   - Booting up control plane ...
	I0906 11:29:45.150146    8455 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 11:29:45.150211    8455 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 11:29:45.150267    8455 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 11:29:45.150350    8455 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 11:29:45.151014    8455 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 11:29:45.151110    8455 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 11:29:45.255434    8455 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 11:29:45.255548    8455 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 11:29:46.255712    8455 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001186147s
	I0906 11:29:46.255799    8455 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 11:29:50.754321    8455 kubeadm.go:310] [api-check] The API server is healthy after 4.50183121s
	I0906 11:29:50.763214    8455 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 11:29:50.774769    8455 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 11:29:50.786473    8455 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 11:29:50.786624    8455 kubeadm.go:310] [mark-control-plane] Marking the node addons-565000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 11:29:50.793440    8455 kubeadm.go:310] [bootstrap-token] Using token: v414ok.q4r5c1ktsisywdo3
	I0906 11:29:50.831636    8455 out.go:235]   - Configuring RBAC rules ...
	I0906 11:29:50.831770    8455 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 11:29:50.875990    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 11:29:50.880177    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 11:29:50.882314    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 11:29:50.884187    8455 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 11:29:50.921970    8455 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 11:29:51.164312    8455 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 11:29:51.583591    8455 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 11:29:52.159842    8455 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 11:29:52.160656    8455 kubeadm.go:310] 
	I0906 11:29:52.160707    8455 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 11:29:52.160712    8455 kubeadm.go:310] 
	I0906 11:29:52.160791    8455 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 11:29:52.160799    8455 kubeadm.go:310] 
	I0906 11:29:52.160829    8455 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 11:29:52.160885    8455 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 11:29:52.160928    8455 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 11:29:52.160940    8455 kubeadm.go:310] 
	I0906 11:29:52.160984    8455 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 11:29:52.160989    8455 kubeadm.go:310] 
	I0906 11:29:52.161032    8455 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 11:29:52.161038    8455 kubeadm.go:310] 
	I0906 11:29:52.161086    8455 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 11:29:52.161158    8455 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 11:29:52.161212    8455 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 11:29:52.161223    8455 kubeadm.go:310] 
	I0906 11:29:52.161299    8455 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 11:29:52.161367    8455 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 11:29:52.161375    8455 kubeadm.go:310] 
	I0906 11:29:52.161443    8455 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v414ok.q4r5c1ktsisywdo3 \
	I0906 11:29:52.161526    8455 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:45446d42e4448e7605f26f9a5cfb01778d08c7c0d429a2f5a46c753d1be13709 \
	I0906 11:29:52.161542    8455 kubeadm.go:310] 	--control-plane 
	I0906 11:29:52.161550    8455 kubeadm.go:310] 
	I0906 11:29:52.161626    8455 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 11:29:52.161635    8455 kubeadm.go:310] 
	I0906 11:29:52.161703    8455 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v414ok.q4r5c1ktsisywdo3 \
	I0906 11:29:52.161782    8455 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:45446d42e4448e7605f26f9a5cfb01778d08c7c0d429a2f5a46c753d1be13709 
	I0906 11:29:52.162086    8455 kubeadm.go:310] W0906 18:29:40.808614    1575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 11:29:52.162318    8455 kubeadm.go:310] W0906 18:29:40.809137    1575 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 11:29:52.162407    8455 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 11:29:52.162427    8455 cni.go:84] Creating CNI manager for ""
	I0906 11:29:52.162440    8455 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:52.220101    8455 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 11:29:52.241095    8455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 11:29:52.249649    8455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 11:29:52.265503    8455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 11:29:52.265585    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:52.265594    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-565000 minikube.k8s.io/updated_at=2024_09_06T11_29_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-565000 minikube.k8s.io/primary=true
	I0906 11:29:52.292049    8455 ops.go:34] apiserver oom_adj: -16
	I0906 11:29:52.360649    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:52.861465    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:53.360705    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:53.860711    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:54.360781    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:54.861517    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:55.360895    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:55.860733    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:56.360803    8455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 11:29:56.420219    8455 kubeadm.go:1113] duration metric: took 4.154703743s to wait for elevateKubeSystemPrivileges
	I0906 11:29:56.420238    8455 kubeadm.go:394] duration metric: took 14.720029334s to StartCluster
	I0906 11:29:56.420254    8455 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:56.420424    8455 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:29:56.420653    8455 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:56.420911    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 11:29:56.420937    8455 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.21 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 11:29:56.420963    8455 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 11:29:56.421015    8455 addons.go:69] Setting yakd=true in profile "addons-565000"
	I0906 11:29:56.421030    8455 addons.go:69] Setting inspektor-gadget=true in profile "addons-565000"
	I0906 11:29:56.421041    8455 addons.go:234] Setting addon yakd=true in "addons-565000"
	I0906 11:29:56.421037    8455 addons.go:69] Setting storage-provisioner=true in profile "addons-565000"
	I0906 11:29:56.421049    8455 addons.go:69] Setting volcano=true in profile "addons-565000"
	I0906 11:29:56.421074    8455 addons.go:69] Setting gcp-auth=true in profile "addons-565000"
	I0906 11:29:56.421081    8455 addons.go:69] Setting ingress-dns=true in profile "addons-565000"
	I0906 11:29:56.421089    8455 addons.go:69] Setting metrics-server=true in profile "addons-565000"
	I0906 11:29:56.421107    8455 addons.go:69] Setting cloud-spanner=true in profile "addons-565000"
	I0906 11:29:56.421108    8455 addons.go:69] Setting volumesnapshots=true in profile "addons-565000"
	I0906 11:29:56.421095    8455 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-565000"
	I0906 11:29:56.421125    8455 addons.go:234] Setting addon metrics-server=true in "addons-565000"
	I0906 11:29:56.421124    8455 addons.go:234] Setting addon volcano=true in "addons-565000"
	I0906 11:29:56.421136    8455 addons.go:234] Setting addon cloud-spanner=true in "addons-565000"
	I0906 11:29:56.421150    8455 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-565000"
	I0906 11:29:56.421092    8455 addons.go:234] Setting addon storage-provisioner=true in "addons-565000"
	I0906 11:29:56.421183    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421140    8455 addons.go:234] Setting addon volumesnapshots=true in "addons-565000"
	I0906 11:29:56.421186    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421223    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421072    8455 addons.go:69] Setting helm-tiller=true in profile "addons-565000"
	I0906 11:29:56.421259    8455 addons.go:234] Setting addon helm-tiller=true in "addons-565000"
	I0906 11:29:56.421078    8455 addons.go:69] Setting ingress=true in profile "addons-565000"
	I0906 11:29:56.421283    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421288    8455 addons.go:234] Setting addon ingress=true in "addons-565000"
	I0906 11:29:56.421061    8455 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-565000"
	I0906 11:29:56.421315    8455 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-565000"
	I0906 11:29:56.421318    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421101    8455 addons.go:234] Setting addon ingress-dns=true in "addons-565000"
	I0906 11:29:56.421333    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421377    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421065    8455 addons.go:234] Setting addon inspektor-gadget=true in "addons-565000"
	I0906 11:29:56.421442    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421561    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421580    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421588    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421599    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421604    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421620    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421621    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421640    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421665    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421675    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421686    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421687    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421701    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421101    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421103    8455 mustload.go:65] Loading cluster: addons-565000
	I0906 11:29:56.421745    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421098    8455 addons.go:69] Setting registry=true in profile "addons-565000"
	I0906 11:29:56.421752    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421769    8455 addons.go:234] Setting addon registry=true in "addons-565000"
	I0906 11:29:56.421768    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.421774    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421786    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421798    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.421136    8455 addons.go:69] Setting default-storageclass=true in profile "addons-565000"
	I0906 11:29:56.421907    8455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-565000"
	I0906 11:29:56.421147    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421125    8455 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-565000"
	I0906 11:29:56.421150    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.421106    8455 config.go:182] Loaded profile config "addons-565000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:56.423237    8455 config.go:182] Loaded profile config "addons-565000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:29:56.423341    8455 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-565000"
	I0906 11:29:56.423482    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.423762    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425212    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.425267    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425268    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425266    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425183    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425774    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.425852    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.425912    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.425918    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.426917    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.427026    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.427180    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.427278    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.436533    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53815
	I0906 11:29:56.440847    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.440997    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53817
	I0906 11:29:56.441095    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53818
	I0906 11:29:56.446302    8455 out.go:177] * Verifying Kubernetes components...
	I0906 11:29:56.446722    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53821
	I0906 11:29:56.446744    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53822
	I0906 11:29:56.446749    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.446754    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.446766    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53823
	I0906 11:29:56.447675    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.451778    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53824
	I0906 11:29:56.463442    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.452528    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53825
	I0906 11:29:56.456155    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53826
	I0906 11:29:56.456184    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53828
	I0906 11:29:56.456204    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53827
	I0906 11:29:56.458996    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53829
	I0906 11:29:56.460005    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53830
	I0906 11:29:56.464034    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464038    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.460211    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53831
	I0906 11:29:56.462325    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53832
	I0906 11:29:56.464157    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464210    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464238    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.464277    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464277    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464306    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464327    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464345    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464335    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464489    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464586    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464672    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464797    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464800    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464801    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464830    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464840    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.464843    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464927    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.464956    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53845
	I0906 11:29:56.464966    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.464976    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465060    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465071    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465359    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.465412    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465445    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465522    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465531    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465537    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465546    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465592    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465600    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465609    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465672    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.465674    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465689    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465701    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.465726    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465734    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465739    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465742    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465752    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465753    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.465756    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.465770    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.467198    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.467946    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.468145    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.468189    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.469703    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.468329    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.469669    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470044    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470058    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470063    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.470071    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.470081    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.470101    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.470110    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.470135    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.470012    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.471587    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.471856    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.472158    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.472359    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.472285    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.472449    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.472588    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.472668    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.473592    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.473633    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.473665    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.474108    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.474253    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.474323    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.475200    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.475334    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.475320    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.475489    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.475579    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.475775    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.476017    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476133    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476204    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.476053    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476368    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476550    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476639    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476666    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.476707    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.476710    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.477228    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.477291    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.479695    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.481574    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.482596    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53847
	I0906 11:29:56.486165    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.486175    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53849
	I0906 11:29:56.489566    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.491994    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.492296    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.492390    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53851
	I0906 11:29:56.494937    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53852
	I0906 11:29:56.495375    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.497952    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.498199    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.498212    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53855
	I0906 11:29:56.498498    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.498545    8455 addons.go:234] Setting addon default-storageclass=true in "addons-565000"
	I0906 11:29:56.498655    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.498795    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.498812    8455 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-565000"
	I0906 11:29:56.498881    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.498953    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:29:56.503640    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.503655    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.503819    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53857
	I0906 11:29:56.503818    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53858
	I0906 11:29:56.503866    8455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 11:29:56.503881    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.503930    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.504693    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.505873    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.506092    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.506123    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.505841    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.506431    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53861
	I0906 11:29:56.506491    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.506582    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.506682    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.506719    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.506881    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.506957    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.507072    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.512514    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.512637    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.513745    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.514156    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.514236    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.514425    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.514677    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.514617    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53863
	I0906 11:29:56.514665    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.514745    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.514866    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.514769    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.514981    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.515008    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53864
	I0906 11:29:56.515075    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.515416    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.515405    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.515444    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.515449    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.515557    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.515575    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.528277    8455 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 11:29:56.518995    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.520950    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.521007    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53867
	I0906 11:29:56.521018    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.521058    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.521449    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.521847    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.521974    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.521998    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.522067    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.522909    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53868
	I0906 11:29:56.524440    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53869
	I0906 11:29:56.527741    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53870
	I0906 11:29:56.518367    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.528569    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53871
	I0906 11:29:56.528852    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.528988    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529134    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529139    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.529166    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.529227    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.529254    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.529500    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.529576    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.529659    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.549483    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.529690    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.549490    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.529752    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529768    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.529789    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.549590    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.529796    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.530590    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.530657    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.549721    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.530737    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.532082    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53877
	I0906 11:29:56.549180    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 11:29:56.549863    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.549977    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.586672    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.550052    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.623422    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.549539    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.550155    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.550293    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.550296    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.623527    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.550342    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.550486    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.550944    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.586321    8455 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 11:29:56.623628    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 11:29:56.623643    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.586834    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.586960    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.623188    8455 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 11:29:56.623197    8455 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 11:29:56.623504    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.623515    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.623770    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.623877    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.623909    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.623912    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.623934    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.623937    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.624060    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.625134    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.697573    8455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 11:29:56.697571    8455 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 11:29:56.697618    8455 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 11:29:56.698304    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.698666    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.698720    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.698924    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.700206    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.708965    8455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 11:29:56.709063    8455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 11:29:56.718376    8455 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 11:29:56.718543    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.718561    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.718708    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.718729    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.755320    8455 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 11:29:56.755573    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 11:29:56.755605    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.755619    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.755661    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 11:29:56.776651    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.755557    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.755857    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.756014    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.756001    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.756009    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.756009    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.776941    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.776939    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.756733    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.758418    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.776993    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.777037    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.765000    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53880
	I0906 11:29:56.777056    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.777060    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.777080    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.776388    8455 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 11:29:56.777160    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.777196    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.777342    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.777596    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:56.777605    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.778278    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.778315    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.797339    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:29:56.797806    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.797816    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.818188    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 11:29:56.818356    8455 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 11:29:56.818392    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.818190    8455 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 11:29:56.818456    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:29:56.818302    8455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:29:56.855556    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 11:29:56.818295    8455 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 11:29:56.855574    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.855585    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 11:29:56.818615    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.855619    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.818687    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.818879    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.855313    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 11:29:56.856276    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.856311    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.863337    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 11:29:56.864707    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53885
	I0906 11:29:56.876313    8455 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0906 11:29:56.876711    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.882651    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 11:29:56.897235    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 11:29:56.897414    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.904823    8455 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 11:29:56.934459    8455 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 11:29:56.934857    8455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 11:29:56.934897    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:56.935030    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.935094    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.935179    8455 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 11:29:56.935275    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:56.935345    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.935396    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.935459    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.935675    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.935773    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.935830    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:56.936040    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.936142    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.936304    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:56.936370    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:29:56.936455    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.936577    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:56.936747    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.936872    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.937001    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:29:56.937028    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:29:56.937369    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:29:56.937527    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:29:56.937672    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:29:56.937783    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:29:56.938306    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.938981    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:29:56.987318    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 11:29:56.987708    8455 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 11:29:57.008385    8455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 11:29:57.008214    8455 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 11:29:57.008406    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.008410    8455 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 11:29:57.008226    8455 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 11:29:57.008429    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.008243    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:29:57.008227    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 11:29:57.008573    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.008606    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.009811    8455 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 11:29:57.029259    8455 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 11:29:57.050499    8455 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 11:29:57.050523    8455 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 11:29:57.087386    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.050662    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.050678    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.083696    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 11:29:57.087101    8455 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0906 11:29:57.087150    8455 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 11:29:57.124765    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 11:29:57.087568    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.087595    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.087597    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.116233    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 11:29:57.120011    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 11:29:57.124472    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 11:29:57.124822    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.145588    8455 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 11:29:57.145602    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 11:29:57.124858    8455 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 11:29:57.145617    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.125025    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.125023    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.125024    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.125026    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.145833    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.145859    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.145863    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.150688    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 11:29:57.166629    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.166637    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.166660    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.175366    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 11:29:57.187716    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 11:29:57.187816    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.187852    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.224402    8455 out.go:177]   - Using image docker.io/busybox:stable
	I0906 11:29:57.224733    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.238083    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 11:29:57.245461    8455 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 11:29:57.245234    8455 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0906 11:29:57.266380    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 11:29:57.287436    8455 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 11:29:57.287524    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 11:29:57.287545    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.287691    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.287787    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.287875    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.287976    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.297104    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 11:29:57.297115    8455 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 11:29:57.315582    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 11:29:57.315593    8455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 11:29:57.329712    8455 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 11:29:57.329724    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0906 11:29:57.329739    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.329868    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.329941    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.330039    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.330119    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.342994    8455 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 11:29:57.343005    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 11:29:57.387084    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 11:29:57.391094    8455 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 11:29:57.391106    8455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 11:29:57.405605    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 11:29:57.423540    8455 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 11:29:57.423552    8455 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 11:29:57.423931    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 11:29:57.445048    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 11:29:57.483904    8455 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 11:29:57.483918    8455 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 11:29:57.484726    8455 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 11:29:57.484735    8455 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 11:29:57.486807    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 11:29:57.492154    8455 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 11:29:57.492167    8455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 11:29:57.545077    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 11:29:57.559273    8455 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0906 11:29:57.559897    8455 node_ready.go:35] waiting up to 6m0s for node "addons-565000" to be "Ready" ...
	I0906 11:29:57.567128    8455 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 11:29:57.567140    8455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 11:29:57.569715    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 11:29:57.584195    8455 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 11:29:57.584208    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 11:29:57.588706    8455 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 11:29:57.588718    8455 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 11:29:57.603058    8455 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 11:29:57.624035    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 11:29:57.624055    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 11:29:57.624078    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:29:57.624240    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:29:57.624383    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:29:57.624486    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:29:57.624599    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:29:57.638014    8455 node_ready.go:49] node "addons-565000" has status "Ready":"True"
	I0906 11:29:57.638028    8455 node_ready.go:38] duration metric: took 78.116304ms for node "addons-565000" to be "Ready" ...
	I0906 11:29:57.638037    8455 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	W0906 11:29:57.654349    8455 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-565000" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0906 11:29:57.654362    8455 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0906 11:29:57.668286    8455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace to be "Ready" ...
	I0906 11:29:57.693200    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 11:29:57.697036    8455 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 11:29:57.697049    8455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 11:29:57.795571    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 11:29:57.814012    8455 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 11:29:57.814024    8455 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 11:29:57.828048    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 11:29:57.841022    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 11:29:57.944816    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 11:29:57.944829    8455 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 11:29:58.136317    8455 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 11:29:58.136329    8455 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 11:29:58.406130    8455 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:29:58.406143    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 11:29:58.783795    8455 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 11:29:58.783808    8455 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 11:29:58.850827    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 11:29:58.850841    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 11:29:58.966562    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:29:59.027963    8455 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 11:29:59.027976    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 11:29:59.150576    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 11:29:59.150591    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 11:29:59.152813    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.217849688s)
	I0906 11:29:59.152838    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.152845    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.152955    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.218115416s)
	I0906 11:29:59.152983    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.152990    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.153002    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.153000    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153022    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.153042    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.153051    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.153137    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153142    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.153146    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.153172    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.153179    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.153257    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153266    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.153268    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.153350    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.153362    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.255176    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 11:29:59.255190    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 11:29:59.267630    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 11:29:59.341072    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 11:29:59.341086    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 11:29:59.566332    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.478873298s)
	I0906 11:29:59.566359    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.566366    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.566543    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.566543    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:29:59.566552    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.566559    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:29:59.566563    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:29:59.566706    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:29:59.566717    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:29:59.605212    8455 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 11:29:59.605224    8455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 11:29:59.672786    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:29:59.963492    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 11:29:59.963505    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 11:30:00.324939    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 11:30:00.324958    8455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 11:30:00.591312    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.46645806s)
	I0906 11:30:00.591344    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.591356    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.591527    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.591536    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.591543    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.591547    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.591569    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.591700    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.591716    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.591726    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.622405    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 11:30:00.622418    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 11:30:00.775030    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.608503239s)
	I0906 11:30:00.775066    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775078    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775086    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.369460747s)
	I0906 11:30:00.775114    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775139    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775369    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775381    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.775409    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.775410    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775427    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775446    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775467    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.775484    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.775495    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.775656    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.775685    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775699    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.775730    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.775764    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:00.785954    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:00.785974    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:00.786133    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:00.786184    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:00.786196    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:01.010438    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 11:30:01.010451    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 11:30:01.241014    8455 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 11:30:01.241031    8455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 11:30:01.544389    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 11:30:01.756601    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:03.776050    8455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 11:30:03.776072    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:30:03.776222    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:30:03.776327    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:30:03.776425    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:30:03.776520    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:30:04.056811    8455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 11:30:04.181174    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:04.222147    8455 addons.go:234] Setting addon gcp-auth=true in "addons-565000"
	I0906 11:30:04.222180    8455 host.go:66] Checking if "addons-565000" exists ...
	I0906 11:30:04.222463    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:30:04.222480    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:30:04.231735    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53901
	I0906 11:30:04.232095    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:30:04.232496    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:30:04.232512    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:30:04.232739    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:30:04.233145    8455 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:30:04.233163    8455 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:30:04.242121    8455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53903
	I0906 11:30:04.242462    8455 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:30:04.242817    8455 main.go:141] libmachine: Using API Version  1
	I0906 11:30:04.242840    8455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:30:04.243079    8455 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:30:04.243208    8455 main.go:141] libmachine: (addons-565000) Calling .GetState
	I0906 11:30:04.243293    8455 main.go:141] libmachine: (addons-565000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:30:04.243381    8455 main.go:141] libmachine: (addons-565000) DBG | hyperkit pid from json: 8468
	I0906 11:30:04.244374    8455 main.go:141] libmachine: (addons-565000) Calling .DriverName
	I0906 11:30:04.244540    8455 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 11:30:04.244552    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHHostname
	I0906 11:30:04.244639    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHPort
	I0906 11:30:04.244719    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHKeyPath
	I0906 11:30:04.244834    8455 main.go:141] libmachine: (addons-565000) Calling .GetSSHUsername
	I0906 11:30:04.244908    8455 sshutil.go:53] new ssh client: &{IP:192.169.0.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/addons-565000/id_rsa Username:docker}
	I0906 11:30:05.084263    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.66033458s)
	I0906 11:30:05.084298    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084305    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084321    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.514601292s)
	I0906 11:30:05.084350    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084362    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084399    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.391204303s)
	I0906 11:30:05.084422    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084432    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084499    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.288920476s)
	I0906 11:30:05.084532    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084545    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.084547    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084548    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084592    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084591    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.084601    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084606    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084610    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084616    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084620    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084628    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084556    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.256506457s)
	I0906 11:30:05.084653    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084663    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084668    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084683    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084689    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.084693    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084747    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084802    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.084810    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.084818    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.084825    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.084825    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085009    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085039    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085050    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085070    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.085080    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.085083    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085104    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085111    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085121    8455 addons.go:475] Verifying addon ingress=true in "addons-565000"
	I0906 11:30:05.085126    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085362    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085389    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085361    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085413    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085419    8455 addons.go:475] Verifying addon metrics-server=true in "addons-565000"
	I0906 11:30:05.085472    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085482    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085594    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.085629    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.085637    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.085647    8455 addons.go:475] Verifying addon registry=true in "addons-565000"
	I0906 11:30:05.085229    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.086132    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.125960    8455 out.go:177] * Verifying ingress addon...
	I0906 11:30:05.168796    8455 out.go:177] * Verifying registry addon...
	I0906 11:30:05.209883    8455 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-565000 service yakd-dashboard -n yakd-dashboard
	
	I0906 11:30:05.231457    8455 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 11:30:05.268300    8455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 11:30:05.314015    8455 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 11:30:05.314040    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:05.314422    8455 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 11:30:05.314431    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:05.334518    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:05.334532    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:05.334682    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:05.334689    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:05.334689    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:05.752240    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:05.783002    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.207614    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:06.265464    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:06.385025    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.744791    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:06.845743    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:06.935273    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.094254469s)
	I0906 11:30:06.935304    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935312    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935329    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.968766848s)
	W0906 11:30:06.935351    8455 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 11:30:06.935380    8455 retry.go:31] will retry after 268.697102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 11:30:06.935393    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.667764991s)
	I0906 11:30:06.935414    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935426    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935489    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:06.935497    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935506    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:06.935517    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935524    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935574    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935581    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:06.935589    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:06.935595    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:06.935652    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935665    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:06.935683    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:06.935748    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:06.935757    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:07.204733    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 11:30:07.252648    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:07.385674    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:07.574377    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.029977227s)
	I0906 11:30:07.574403    8455 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.329860607s)
	I0906 11:30:07.574405    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:07.574430    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:07.574595    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:07.574596    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:07.574607    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:07.574617    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:07.574624    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:07.574761    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:07.574773    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:07.574774    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:07.574792    8455 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-565000"
	I0906 11:30:07.601380    8455 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 11:30:07.658168    8455 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 11:30:07.700267    8455 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 11:30:07.700897    8455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 11:30:07.721151    8455 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 11:30:07.721166    8455 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 11:30:07.756600    8455 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 11:30:07.756613    8455 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 11:30:07.762365    8455 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 11:30:07.762377    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:07.785903    8455 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 11:30:07.785915    8455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 11:30:07.793816    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:07.793930    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:07.808109    8455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 11:30:08.204144    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:08.235373    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:08.270969    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:08.620171    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.415414624s)
	I0906 11:30:08.620199    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.620208    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.620386    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:08.620392    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.620401    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.620409    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.620415    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.620552    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:08.620571    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.620580    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.678805    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:08.709812    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:08.825512    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:08.829000    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:08.847023    8455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.038895786s)
	I0906 11:30:08.847048    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.847057    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.847252    8455 main.go:141] libmachine: (addons-565000) DBG | Closing plugin on server side
	I0906 11:30:08.847253    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.847269    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.847278    8455 main.go:141] libmachine: Making call to close driver server
	I0906 11:30:08.847283    8455 main.go:141] libmachine: (addons-565000) Calling .Close
	I0906 11:30:08.847402    8455 main.go:141] libmachine: Successfully made call to close driver server
	I0906 11:30:08.847412    8455 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 11:30:08.848332    8455 addons.go:475] Verifying addon gcp-auth=true in "addons-565000"
	I0906 11:30:08.874335    8455 out.go:177] * Verifying gcp-auth addon...
	I0906 11:30:08.931430    8455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 11:30:08.933534    8455 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 11:30:09.206156    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:09.233832    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:09.271889    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:09.705437    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:09.734949    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:09.805889    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:10.205862    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:10.233962    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:10.271951    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:10.704864    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:10.734864    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:10.771057    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:11.171641    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:11.203752    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:11.235696    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:11.270597    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:11.703861    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:11.734479    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:11.803604    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:12.205280    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:12.233961    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:12.271168    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:12.706218    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:12.735450    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:12.774856    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:13.172003    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:13.204890    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:13.234616    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:13.273404    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:13.704172    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:13.735028    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:13.770364    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:14.204988    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:14.233800    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:14.271674    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:14.703819    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:14.733896    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:14.770960    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:15.203928    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:15.233774    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:15.270433    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:15.672262    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:15.704340    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:15.734107    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:15.772110    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:16.204785    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:16.306886    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:16.307019    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:16.703600    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:16.734760    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:16.772040    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:17.204393    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:17.234575    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:17.273015    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:17.704365    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:17.734294    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:17.770680    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:18.172614    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:18.204611    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:18.234096    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:18.272183    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:18.704409    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:18.734073    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:18.770995    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:19.204012    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:19.234554    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:19.270456    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:19.704292    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:19.734326    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:19.770747    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:20.173772    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:20.204025    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:20.233976    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:20.271106    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:20.704645    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:20.734562    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:20.770462    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:21.204117    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:21.234104    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:21.270675    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:21.704037    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:21.734645    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:21.803534    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:22.204915    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:22.234721    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:22.271951    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:22.672648    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:22.704206    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:22.735832    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:22.770624    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:23.203657    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:23.234885    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:23.272603    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:23.705598    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:23.735416    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:23.771798    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:24.204841    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:24.233853    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:24.271588    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:24.673454    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:24.785153    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:24.785382    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:24.785727    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.204225    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.238348    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:25.271567    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:25.705677    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:25.734921    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:25.771626    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:26.203921    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:26.234233    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:26.271127    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:26.704385    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:26.734361    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:26.772578    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:27.172737    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:27.204532    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:27.234268    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:27.360774    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:27.705680    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:27.734652    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:27.773251    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:28.204118    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:28.234692    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:28.270814    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:28.704132    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:28.735332    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:28.771420    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:29.206138    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:29.234411    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:29.270328    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:29.673427    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:29.704527    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:29.734053    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:29.771106    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:30.203996    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:30.235508    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:30.335802    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:30.705206    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:30.735024    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:30.772031    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:31.203946    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:31.234238    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:31.270704    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:31.674698    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:31.704817    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:31.735357    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:31.770457    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:32.204274    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:32.234338    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:32.271635    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:32.703723    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:32.735067    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:32.771147    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:33.204515    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:33.235946    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:33.270477    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:33.704384    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:33.733984    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:33.770661    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:34.172308    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:34.205353    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:34.235316    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:34.272336    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:34.704555    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:34.735148    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:34.770398    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:35.204712    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:35.235536    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:35.270625    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:35.766294    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:35.766350    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:35.771534    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:36.173751    8455 pod_ready.go:103] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"False"
	I0906 11:30:36.207299    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:36.235859    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:36.273799    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 11:30:36.705316    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:36.735271    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:36.771004    8455 kapi.go:107] duration metric: took 31.502792788s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 11:30:37.205426    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:37.234825    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:37.672676    8455 pod_ready.go:93] pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.672688    8455 pod_ready.go:82] duration metric: took 40.004500142s for pod "coredns-6f6b679f8f-jjpz5" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.672695    8455 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-k8jth" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.675959    8455 pod_ready.go:93] pod "coredns-6f6b679f8f-k8jth" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.675969    8455 pod_ready.go:82] duration metric: took 3.268936ms for pod "coredns-6f6b679f8f-k8jth" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.675976    8455 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.679319    8455 pod_ready.go:93] pod "etcd-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.679328    8455 pod_ready.go:82] duration metric: took 3.348271ms for pod "etcd-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.679335    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.682816    8455 pod_ready.go:93] pod "kube-apiserver-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.682825    8455 pod_ready.go:82] duration metric: took 3.485903ms for pod "kube-apiserver-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.682831    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.685824    8455 pod_ready.go:93] pod "kube-controller-manager-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:37.685833    8455 pod_ready.go:82] duration metric: took 2.997205ms for pod "kube-controller-manager-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.685839    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ngbg9" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:37.703539    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:37.734759    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.073069    8455 pod_ready.go:93] pod "kube-proxy-ngbg9" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:38.073082    8455 pod_ready.go:82] duration metric: took 387.228788ms for pod "kube-proxy-ngbg9" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.073089    8455 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.204091    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:38.234507    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.470751    8455 pod_ready.go:93] pod "kube-scheduler-addons-565000" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:38.470764    8455 pod_ready.go:82] duration metric: took 397.672182ms for pod "kube-scheduler-addons-565000" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.470771    8455 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2d5x4" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.704619    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:38.735824    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:38.871317    8455 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-2d5x4" in "kube-system" namespace has status "Ready":"True"
	I0906 11:30:38.871329    8455 pod_ready.go:82] duration metric: took 400.552932ms for pod "nvidia-device-plugin-daemonset-2d5x4" in "kube-system" namespace to be "Ready" ...
	I0906 11:30:38.871334    8455 pod_ready.go:39] duration metric: took 41.233405742s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 11:30:38.871358    8455 api_server.go:52] waiting for apiserver process to appear ...
	I0906 11:30:38.871410    8455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:30:38.891002    8455 api_server.go:72] duration metric: took 42.470167316s to wait for apiserver process to appear ...
	I0906 11:30:38.891015    8455 api_server.go:88] waiting for apiserver healthz status ...
	I0906 11:30:38.891032    8455 api_server.go:253] Checking apiserver healthz at https://192.169.0.21:8443/healthz ...
	I0906 11:30:38.894959    8455 api_server.go:279] https://192.169.0.21:8443/healthz returned 200:
	ok
	I0906 11:30:38.895597    8455 api_server.go:141] control plane version: v1.31.0
	I0906 11:30:38.895607    8455 api_server.go:131] duration metric: took 4.587861ms to wait for apiserver health ...
	I0906 11:30:38.895612    8455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 11:30:39.076644    8455 system_pods.go:59] 19 kube-system pods found
	I0906 11:30:39.076664    8455 system_pods.go:61] "coredns-6f6b679f8f-jjpz5" [cb713a3d-2e0e-4205-9273-5b2a6393fe7e] Running
	I0906 11:30:39.076668    8455 system_pods.go:61] "coredns-6f6b679f8f-k8jth" [2bad9a21-c9b3-41db-ba12-73a2a482ea5f] Running
	I0906 11:30:39.076673    8455 system_pods.go:61] "csi-hostpath-attacher-0" [32c64849-b5bc-4889-b0bd-bff533458c95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 11:30:39.076679    8455 system_pods.go:61] "csi-hostpath-resizer-0" [35ca7619-3cef-4ad1-886a-73bfe39cfdc9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 11:30:39.076684    8455 system_pods.go:61] "csi-hostpathplugin-s2s7r" [abafc424-abbe-4002-9dea-b3e02be0fbf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 11:30:39.076688    8455 system_pods.go:61] "etcd-addons-565000" [94c45814-5893-4ee7-9167-283943b85079] Running
	I0906 11:30:39.076692    8455 system_pods.go:61] "kube-apiserver-addons-565000" [a9cc7a60-afe4-4958-9310-4680206a7c8d] Running
	I0906 11:30:39.076695    8455 system_pods.go:61] "kube-controller-manager-addons-565000" [32aa0352-4089-4b7f-b4bb-c7a2bfab169a] Running
	I0906 11:30:39.076697    8455 system_pods.go:61] "kube-ingress-dns-minikube" [39524e97-a112-44d1-9b4b-fb721afeef8b] Running
	I0906 11:30:39.076700    8455 system_pods.go:61] "kube-proxy-ngbg9" [5ec63286-71d3-40f9-a90a-90e231b9fb68] Running
	I0906 11:30:39.076703    8455 system_pods.go:61] "kube-scheduler-addons-565000" [e2b538c2-7781-4d08-8478-7482c429c98e] Running
	I0906 11:30:39.076706    8455 system_pods.go:61] "metrics-server-84c5f94fbc-s7gvk" [97f3e6bc-d390-4600-b657-42ea5558f40a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 11:30:39.076710    8455 system_pods.go:61] "nvidia-device-plugin-daemonset-2d5x4" [1aa1523c-25e2-4776-be7b-082ac60d2875] Running
	I0906 11:30:39.076713    8455 system_pods.go:61] "registry-6fb4cdfc84-w9b9z" [9a2c23b0-9024-42ce-9924-42afdfdbc0de] Running
	I0906 11:30:39.076716    8455 system_pods.go:61] "registry-proxy-75mvw" [1241891a-f5f4-4e10-ad7b-3c7977e9bb11] Running
	I0906 11:30:39.076721    8455 system_pods.go:61] "snapshot-controller-56fcc65765-278m8" [6787b10c-0be2-4164-8a9b-3afbcce0c71b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.076727    8455 system_pods.go:61] "snapshot-controller-56fcc65765-4ljgq" [500841ed-6cb6-4a7a-87bd-4242010b2e9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.076730    8455 system_pods.go:61] "storage-provisioner" [2522ffe3-2a6a-4185-9d41-edaef41562e4] Running
	I0906 11:30:39.076733    8455 system_pods.go:61] "tiller-deploy-b48cc5f79-xhknj" [0749425f-a61d-4e70-9b16-7d3819962a26] Running
	I0906 11:30:39.076738    8455 system_pods.go:74] duration metric: took 181.121835ms to wait for pod list to return data ...
	I0906 11:30:39.076743    8455 default_sa.go:34] waiting for default service account to be created ...
	I0906 11:30:39.203780    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:39.234430    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:39.271315    8455 default_sa.go:45] found service account: "default"
	I0906 11:30:39.271329    8455 default_sa.go:55] duration metric: took 194.582184ms for default service account to be created ...
	I0906 11:30:39.271339    8455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 11:30:39.476332    8455 system_pods.go:86] 19 kube-system pods found
	I0906 11:30:39.476348    8455 system_pods.go:89] "coredns-6f6b679f8f-jjpz5" [cb713a3d-2e0e-4205-9273-5b2a6393fe7e] Running
	I0906 11:30:39.476353    8455 system_pods.go:89] "coredns-6f6b679f8f-k8jth" [2bad9a21-c9b3-41db-ba12-73a2a482ea5f] Running
	I0906 11:30:39.476357    8455 system_pods.go:89] "csi-hostpath-attacher-0" [32c64849-b5bc-4889-b0bd-bff533458c95] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 11:30:39.476361    8455 system_pods.go:89] "csi-hostpath-resizer-0" [35ca7619-3cef-4ad1-886a-73bfe39cfdc9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 11:30:39.476366    8455 system_pods.go:89] "csi-hostpathplugin-s2s7r" [abafc424-abbe-4002-9dea-b3e02be0fbf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 11:30:39.476373    8455 system_pods.go:89] "etcd-addons-565000" [94c45814-5893-4ee7-9167-283943b85079] Running
	I0906 11:30:39.476390    8455 system_pods.go:89] "kube-apiserver-addons-565000" [a9cc7a60-afe4-4958-9310-4680206a7c8d] Running
	I0906 11:30:39.476397    8455 system_pods.go:89] "kube-controller-manager-addons-565000" [32aa0352-4089-4b7f-b4bb-c7a2bfab169a] Running
	I0906 11:30:39.476407    8455 system_pods.go:89] "kube-ingress-dns-minikube" [39524e97-a112-44d1-9b4b-fb721afeef8b] Running
	I0906 11:30:39.476412    8455 system_pods.go:89] "kube-proxy-ngbg9" [5ec63286-71d3-40f9-a90a-90e231b9fb68] Running
	I0906 11:30:39.476415    8455 system_pods.go:89] "kube-scheduler-addons-565000" [e2b538c2-7781-4d08-8478-7482c429c98e] Running
	I0906 11:30:39.476420    8455 system_pods.go:89] "metrics-server-84c5f94fbc-s7gvk" [97f3e6bc-d390-4600-b657-42ea5558f40a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 11:30:39.476424    8455 system_pods.go:89] "nvidia-device-plugin-daemonset-2d5x4" [1aa1523c-25e2-4776-be7b-082ac60d2875] Running
	I0906 11:30:39.476427    8455 system_pods.go:89] "registry-6fb4cdfc84-w9b9z" [9a2c23b0-9024-42ce-9924-42afdfdbc0de] Running
	I0906 11:30:39.476430    8455 system_pods.go:89] "registry-proxy-75mvw" [1241891a-f5f4-4e10-ad7b-3c7977e9bb11] Running
	I0906 11:30:39.476434    8455 system_pods.go:89] "snapshot-controller-56fcc65765-278m8" [6787b10c-0be2-4164-8a9b-3afbcce0c71b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.476440    8455 system_pods.go:89] "snapshot-controller-56fcc65765-4ljgq" [500841ed-6cb6-4a7a-87bd-4242010b2e9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 11:30:39.476444    8455 system_pods.go:89] "storage-provisioner" [2522ffe3-2a6a-4185-9d41-edaef41562e4] Running
	I0906 11:30:39.476448    8455 system_pods.go:89] "tiller-deploy-b48cc5f79-xhknj" [0749425f-a61d-4e70-9b16-7d3819962a26] Running
	I0906 11:30:39.476453    8455 system_pods.go:126] duration metric: took 205.110266ms to wait for k8s-apps to be running ...
	I0906 11:30:39.476462    8455 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 11:30:39.476514    8455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 11:30:39.488009    8455 system_svc.go:56] duration metric: took 11.543084ms WaitForService to wait for kubelet
	I0906 11:30:39.488032    8455 kubeadm.go:582] duration metric: took 43.067199604s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 11:30:39.488044    8455 node_conditions.go:102] verifying NodePressure condition ...
	I0906 11:30:39.671936    8455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 11:30:39.671955    8455 node_conditions.go:123] node cpu capacity is 2
	I0906 11:30:39.671963    8455 node_conditions.go:105] duration metric: took 183.916079ms to run NodePressure ...
	I0906 11:30:39.671971    8455 start.go:241] waiting for startup goroutines ...
	I0906 11:30:39.703638    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:39.735064    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:40.268771    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:40.268922    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:40.705072    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:40.736475    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:41.206842    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:41.237376    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:41.703621    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:41.733946    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:42.207141    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:42.234295    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:42.704268    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:42.736920    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:43.204271    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:43.233552    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:43.704506    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:43.735026    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:44.205614    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:44.233707    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:44.704800    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:44.733749    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:45.205355    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:45.233623    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:45.703714    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:45.734341    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:46.207271    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:46.235224    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:46.703567    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:46.734418    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:47.204625    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:47.234864    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:47.704765    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:47.733579    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:48.204068    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:48.235115    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:48.704263    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:48.733636    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:49.205419    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:49.241936    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:49.704811    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:49.734045    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:50.204568    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:50.236082    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:50.703854    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:50.735363    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:51.206236    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:51.236495    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:51.707129    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:51.734259    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:52.204576    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:52.235116    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:52.704268    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:52.733841    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:53.203899    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:53.233990    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:53.704342    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:53.735162    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:54.204532    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:54.235844    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:54.706937    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:54.737422    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:55.203449    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:55.238555    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:55.706058    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:55.734489    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:56.203939    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:56.234295    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:56.704296    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:56.735555    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:57.203199    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:57.235753    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:57.704524    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:57.734332    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:58.203717    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:58.235780    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:58.704723    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:58.733918    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:59.203672    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:59.233488    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:30:59.704120    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:30:59.805685    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:00.204681    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:00.235144    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:00.706503    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:00.734871    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:01.204267    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:01.235365    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:01.703709    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:01.734124    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:02.203540    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:02.235837    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:02.708414    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:02.737377    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:03.205373    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:03.235921    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:03.706007    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:03.735619    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:04.203984    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:04.234288    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:04.704218    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:04.734423    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:05.203728    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:05.234481    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:05.703859    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:05.734201    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:06.203434    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:06.235977    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:06.705343    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:06.734854    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:07.203363    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:07.234475    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:07.704248    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:07.734330    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:08.203410    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:08.234172    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:08.703689    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:08.737380    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:09.205875    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:09.235651    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:09.703703    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:09.735137    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:10.203782    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:10.236135    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:10.704555    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:10.735942    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:11.204419    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:11.234612    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:11.703773    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:11.734688    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:12.204201    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:12.234606    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:12.704284    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:12.733826    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:13.206776    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:13.236507    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:13.704196    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:13.734845    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:14.204903    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:14.234475    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:14.703996    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:14.735384    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:15.205695    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:15.234663    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:15.705438    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:15.734494    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:16.205536    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:16.236202    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:16.705633    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:16.735165    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:17.204190    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:17.233719    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:17.704463    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:17.734113    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:18.204646    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:18.234919    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:18.705247    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:18.734739    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:19.205924    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:19.234703    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:19.704093    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:19.734859    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:20.204088    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:20.234601    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:20.704423    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:20.733910    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:21.205399    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:21.234683    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:21.703925    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:21.734396    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:22.203905    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:22.234151    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:22.703780    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:22.734060    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:23.204855    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:23.233860    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:23.703915    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:23.733847    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:24.203712    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:24.234122    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:24.703564    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:24.733634    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:25.204987    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:25.234667    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:25.703408    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:25.733780    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:26.204004    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:26.233606    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:26.703620    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:26.733901    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:27.203651    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:27.234534    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:27.704534    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:27.733944    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:28.206462    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:28.236101    8455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 11:31:28.703892    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:28.734231    8455 kapi.go:107] duration metric: took 1m23.503011028s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 11:31:29.229069    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:29.704294    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:30.203646    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:30.704201    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:31.204107    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:31.704354    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:32.204544    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:32.706333    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:33.207256    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:33.703920    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:34.204571    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:34.704469    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:35.204144    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:35.705153    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:36.203780    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:36.703991    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:37.203575    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:37.706037    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:38.208562    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:38.703757    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:39.203579    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:39.704238    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:40.205533    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 11:31:40.704962    8455 kapi.go:107] duration metric: took 1m33.00432836s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 11:32:53.933629    8455 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 11:32:53.933641    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:54.439721    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:54.937455    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:55.434033    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:55.933203    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:56.435390    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:56.934517    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:57.433420    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:57.933847    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:58.434214    8455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 11:32:58.934033    8455 kapi.go:107] duration metric: took 2m50.003085598s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 11:32:58.974535    8455 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-565000 cluster.
	I0906 11:32:59.014908    8455 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 11:32:59.036173    8455 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 11:32:59.093940    8455 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, helm-tiller, storage-provisioner, default-storageclass, metrics-server, yakd, storage-provisioner-rancher, volcano, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0906 11:32:59.136054    8455 addons.go:510] duration metric: took 3m2.71561894s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns helm-tiller storage-provisioner default-storageclass metrics-server yakd storage-provisioner-rancher volcano inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0906 11:32:59.136089    8455 start.go:246] waiting for cluster config update ...
	I0906 11:32:59.136114    8455 start.go:255] writing updated cluster config ...
	I0906 11:32:59.136473    8455 ssh_runner.go:195] Run: rm -f paused
	I0906 11:32:59.177790    8455 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0906 11:32:59.215143    8455 out.go:201] 
	W0906 11:32:59.236123    8455 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0906 11:32:59.257068    8455 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0906 11:32:59.336179    8455 out.go:177] * Done! kubectl is now configured to use "addons-565000" cluster and "default" namespace by default
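
The "minor skew: 2" warning above comes from comparing the kubectl client's minor version (1.29) against the cluster server's (1.31); kubectl is only supported within one minor version of the server. A minimal sketch of that comparison (illustrative only, not minikube's actual implementation):

```python
def minor(version: str) -> int:
    """Extract the minor component from a 'major.minor.patch' version string."""
    return int(version.split(".")[1])

# Versions reported in the log above.
client, cluster = "1.29.2", "1.31.0"
skew = abs(minor(cluster) - minor(client))
print(f"minor skew: {skew}")
if skew > 1:
    # kubectl's support policy allows at most one minor version of skew,
    # hence the "may have incompatibilities" warning in the log.
    print("warning: kubectl may have incompatibilities with the cluster")
```

Running `minikube kubectl -- get pods -A`, as the log suggests, sidesteps the skew by downloading a kubectl binary matching the cluster version.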
	
	
	==> Docker <==
	Sep 06 18:45:31 addons-565000 dockerd[1266]: time="2024-09-06T18:45:31.858715723Z" level=warning msg="cleanup warnings time=\"2024-09-06T18:45:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.248147525Z" level=info msg="shim disconnected" id=485efc75aa16c4fe94edc1b78ff3290748ae46f678c4617f7bc1d4d57fd63e11 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.248217244Z" level=warning msg="cleaning up after shim disconnected" id=485efc75aa16c4fe94edc1b78ff3290748ae46f678c4617f7bc1d4d57fd63e11 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.248227354Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1259]: time="2024-09-06T18:45:32.248756617Z" level=info msg="ignoring event" container=485efc75aa16c4fe94edc1b78ff3290748ae46f678c4617f7bc1d4d57fd63e11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:45:32 addons-565000 dockerd[1259]: time="2024-09-06T18:45:32.273036593Z" level=info msg="ignoring event" container=a9968173210fcb266173019db1f0866ecebd7ea746abfeccf0846822e092fb85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.273303715Z" level=info msg="shim disconnected" id=a9968173210fcb266173019db1f0866ecebd7ea746abfeccf0846822e092fb85 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.273355172Z" level=warning msg="cleaning up after shim disconnected" id=a9968173210fcb266173019db1f0866ecebd7ea746abfeccf0846822e092fb85 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.273364578Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1259]: time="2024-09-06T18:45:32.472165538Z" level=info msg="ignoring event" container=38e6f4bd9c83e0645146684e31d9f2ae553a280b5bc469f7f5872c06974ef164 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.472844276Z" level=info msg="shim disconnected" id=38e6f4bd9c83e0645146684e31d9f2ae553a280b5bc469f7f5872c06974ef164 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.473121192Z" level=warning msg="cleaning up after shim disconnected" id=38e6f4bd9c83e0645146684e31d9f2ae553a280b5bc469f7f5872c06974ef164 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.473162920Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.528434162Z" level=info msg="shim disconnected" id=da3825162e3ae73532777736ff508041f09f042b55255a45f03c5ea2f976a709 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1259]: time="2024-09-06T18:45:32.528797969Z" level=info msg="ignoring event" container=da3825162e3ae73532777736ff508041f09f042b55255a45f03c5ea2f976a709 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.529142390Z" level=warning msg="cleaning up after shim disconnected" id=da3825162e3ae73532777736ff508041f09f042b55255a45f03c5ea2f976a709 namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.529181087Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.830340870Z" level=info msg="shim disconnected" id=2cc88a4bc27ae9e49043b9ca1d98102e30d00a6ede8f8159c248e5ac9ff77faf namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1259]: time="2024-09-06T18:45:32.830879294Z" level=info msg="ignoring event" container=2cc88a4bc27ae9e49043b9ca1d98102e30d00a6ede8f8159c248e5ac9ff77faf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.831068406Z" level=warning msg="cleaning up after shim disconnected" id=2cc88a4bc27ae9e49043b9ca1d98102e30d00a6ede8f8159c248e5ac9ff77faf namespace=moby
	Sep 06 18:45:32 addons-565000 dockerd[1266]: time="2024-09-06T18:45:32.831184348Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:45:33 addons-565000 dockerd[1266]: time="2024-09-06T18:45:33.070281986Z" level=info msg="shim disconnected" id=f9d5e7e422ccc53b6a3d7b1155f6c0a4dcf5040ebc40cf51d2b396d10212988e namespace=moby
	Sep 06 18:45:33 addons-565000 dockerd[1266]: time="2024-09-06T18:45:33.071490797Z" level=warning msg="cleaning up after shim disconnected" id=f9d5e7e422ccc53b6a3d7b1155f6c0a4dcf5040ebc40cf51d2b396d10212988e namespace=moby
	Sep 06 18:45:33 addons-565000 dockerd[1266]: time="2024-09-06T18:45:33.071526049Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 18:45:33 addons-565000 dockerd[1259]: time="2024-09-06T18:45:33.071798784Z" level=info msg="ignoring event" container=f9d5e7e422ccc53b6a3d7b1155f6c0a4dcf5040ebc40cf51d2b396d10212988e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dbf74c93f3f38       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            3 minutes ago       Exited              gadget                    7                   8c88e518cfacb       gadget-6j9m6
	d5be48ce48be9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 12 minutes ago      Running             gcp-auth                  0                   f4b2c509afcb5       gcp-auth-89d5ffd79-9jkbf
	fad5fad078b35       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                         14 minutes ago      Running             admission                 0                   ca0dd3e61b34e       volcano-admission-77d7d48b68-tmx68
	e8ddd9c18d912       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             14 minutes ago      Running             controller                0                   0bb8dda5e2a3f       ingress-nginx-controller-bc57996ff-kttz2
	849ab3f4ae24c       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                               14 minutes ago      Running             volcano-scheduler         0                   23639f2b44b93       volcano-scheduler-576bc46687-k5qvm
	9d61e349941da       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                      14 minutes ago      Running             volcano-controllers       0                   9889d6d33187c       volcano-controllers-56675bb4d5-tntd9
	c6b4843f2ae98       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   14 minutes ago      Exited              patch                     0                   cf989fa0fd2dd       ingress-nginx-admission-patch-4nkbp
	d2ef4f5201e40       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   14 minutes ago      Exited              create                    0                   13179db06ca9b       ingress-nginx-admission-create-5gcgq
	f5fc50cfeff28       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       14 minutes ago      Running             local-path-provisioner    0                   e50b6524aca80       local-path-provisioner-86d989889c-tsbl5
	f4ae218923a64       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        14 minutes ago      Running             metrics-server            0                   1fef8a40c6e8b       metrics-server-84c5f94fbc-s7gvk
	a0eac9b63ba60       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  15 minutes ago      Running             tiller                    0                   6fffd90033024       tiller-deploy-b48cc5f79-xhknj
	17f6796db505f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             15 minutes ago      Running             minikube-ingress-dns      0                   234eb79485cd1       kube-ingress-dns-minikube
	1c626a3b93f82       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               15 minutes ago      Running             cloud-spanner-emulator    0                   8d5b732ac553c       cloud-spanner-emulator-769b77f747-gkpvx
	ca501c42a673b       6e38f40d628db                                                                                                                15 minutes ago      Running             storage-provisioner       0                   ee35656ff4dab       storage-provisioner
	96b0436049bfd       cbb01a7bd410d                                                                                                                15 minutes ago      Running             coredns                   0                   50942594ce346       coredns-6f6b679f8f-jjpz5
	3e68275cea827       cbb01a7bd410d                                                                                                                15 minutes ago      Running             coredns                   0                   10e305e405cb8       coredns-6f6b679f8f-k8jth
	ec9f84a5d059c       ad83b2ca7b09e                                                                                                                15 minutes ago      Running             kube-proxy                0                   6aef182d0c233       kube-proxy-ngbg9
	df1ae4949b890       604f5db92eaa8                                                                                                                15 minutes ago      Running             kube-apiserver            0                   eed72b2e0cb8b       kube-apiserver-addons-565000
	7491eeec20a6a       1766f54c897f0                                                                                                                15 minutes ago      Running             kube-scheduler            0                   714f1f244b199       kube-scheduler-addons-565000
	2f4f1dc29a3a5       045733566833c                                                                                                                15 minutes ago      Running             kube-controller-manager   0                   eca5c27fe48ba       kube-controller-manager-addons-565000
	bd4a544e47c77       2e96e5913fc06                                                                                                                15 minutes ago      Running             etcd                      0                   44c5486d095c1       etcd-addons-565000
	
	
	==> controller_ingress [e8ddd9c18d91] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0906 18:31:27.840683       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0906 18:31:27.840779       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0906 18:31:27.845119       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/amd64"
	I0906 18:31:28.058295       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0906 18:31:28.072348       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0906 18:31:28.078984       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0906 18:31:28.090015       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5108b76c-d002-4edb-be2c-40002f23dad1", APIVersion:"v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0906 18:31:28.091406       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"e715ab1b-f17f-4030-bc42-c6415657141d", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0906 18:31:28.091694       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"2c1d064e-0801-4483-a6f9-badd406a30d8", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0906 18:31:29.281363       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0906 18:31:29.281477       7 nginx.go:317] "Starting NGINX process"
	I0906 18:31:29.281807       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0906 18:31:29.281942       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0906 18:31:29.292146       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0906 18:31:29.292482       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-kttz2"
	I0906 18:31:29.297386       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-kttz2" node="addons-565000"
	I0906 18:31:29.315102       7 controller.go:213] "Backend successfully reloaded"
	I0906 18:31:29.315175       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0906 18:31:29.315377       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-kttz2", UID:"e7422f4d-ede8-4441-ad82-d74b6e64868a", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [3e68275cea82] <==
	Trace[887105750]: [30.0015111s] [30.0015111s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1399447648]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.825) (total time: 30001ms):
	Trace[1399447648]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:30:28.825)
	Trace[1399447648]: [30.001703327s] [30.001703327s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[281883191]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.825) (total time: 30001ms):
	Trace[281883191]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.826)
	Trace[281883191]: [30.001720152s] [30.001720152s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:53734 - 19685 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122631s
	[INFO] 10.244.0.8:53734 - 51175 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074673s
	[INFO] 10.244.0.8:53020 - 19533 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100379s
	[INFO] 10.244.0.8:53020 - 14156 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004184s
	[INFO] 10.244.0.8:60138 - 43494 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006767s
	[INFO] 10.244.0.8:60138 - 65253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000036928s
	[INFO] 10.244.0.8:60822 - 7923 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108181s
	[INFO] 10.244.0.8:60822 - 15857 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000045526s
	[INFO] 10.244.0.8:38552 - 55881 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037007s
	[INFO] 10.244.0.8:38552 - 22607 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000021723s
	[INFO] 10.244.0.26:46025 - 53921 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000208202s
	[INFO] 10.244.0.26:34684 - 54921 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000162793s
	[INFO] 10.244.0.26:56877 - 37503 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000979085s
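Editor's note on the query log above: the NXDOMAIN entries preceding each NOERROR answer are the pod resolver's search-domain expansion at work, not failures in coredns. With the typical kubelet `ndots:5` setting, a name with fewer than five dots is tried with each search domain appended before the absolute name. A minimal sketch (the search list and `ndots` value are the conventional defaults, assumed here, not read from this cluster):

```python
# Sketch of resolv.conf-style search-list expansion, matching the
# coredns query log above. Search list and ndots are the usual
# in-cluster defaults (assumed, not taken from this capture).
SEARCH_DOMAINS = [
    "kube-system.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
]

def candidate_names(name: str, ndots: int = 5) -> list[str]:
    """Return the queries a pod's resolver would try, in order.

    A name containing >= ndots dots is treated as (nearly) absolute
    and tried first; otherwise the search domains are appended first
    and the bare name is tried last.
    """
    if name.count(".") >= ndots:
        return [name] + [f"{name}.{d}" for d in SEARCH_DOMAINS]
    return [f"{name}.{d}" for d in SEARCH_DOMAINS] + [name]

# "registry.kube-system.svc.cluster.local" has only 4 dots, so each
# search domain is appended first -- producing exactly the NXDOMAIN
# queries logged above before the final NOERROR answer.
for q in candidate_names("registry.kube-system.svc.cluster.local"):
    print(q)
```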
	
	
	==> coredns [96b0436049bf] <==
	[INFO] plugin/kubernetes: Trace[868897144]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.985) (total time: 30001ms):
	Trace[868897144]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.986)
	Trace[868897144]: [30.001513479s] [30.001513479s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1446559473]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.985) (total time: 30001ms):
	Trace[1446559473]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.986)
	Trace[1446559473]: [30.001200251s] [30.001200251s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[3925288]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 18:29:58.985) (total time: 30001ms):
	Trace[3925288]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:30:28.987)
	Trace[3925288]: [30.001663646s] [30.001663646s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:46408 - 4699 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142282s
	[INFO] 10.244.0.8:46408 - 41029 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047596s
	[INFO] 10.244.0.8:39697 - 48378 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070047s
	[INFO] 10.244.0.8:39697 - 53501 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042703s
	[INFO] 10.244.0.8:43424 - 3655 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079105s
	[INFO] 10.244.0.8:43424 - 37445 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043667s
	[INFO] 10.244.0.26:54180 - 6536 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000180728s
	[INFO] 10.244.0.26:57220 - 58876 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084468s
	[INFO] 10.244.0.26:58974 - 20348 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111158s
	[INFO] 10.244.0.26:59652 - 30528 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000047741s
	[INFO] 10.244.0.26:51541 - 50008 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00055438s
	
	
	==> describe nodes <==
	Name:               addons-565000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-565000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-565000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_29_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-565000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:29:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-565000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:45:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:44:59 +0000   Fri, 06 Sep 2024 18:29:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:44:59 +0000   Fri, 06 Sep 2024 18:29:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:44:59 +0000   Fri, 06 Sep 2024 18:29:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:44:59 +0000   Fri, 06 Sep 2024 18:29:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.21
	  Hostname:    addons-565000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912944Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba8d7ebbe40f4db881630d84cd9f9b07
	  System UUID:                a75d4f7d-0000-0000-aa62-647125e97870
	  Boot ID:                    43c43a22-afbf-4d48-a2ee-e3ce87181c76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-gkpvx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  gadget                      gadget-6j9m6                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  gcp-auth                    gcp-auth-89d5ffd79-9jkbf                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-kttz2                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         15m
	  kube-system                 coredns-6f6b679f8f-jjpz5                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 coredns-6f6b679f8f-k8jth                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-565000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-565000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-565000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ngbg9                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-565000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-s7gvk                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 tiller-deploy-b48cc5f79-xhknj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-86d989889c-tsbl5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  volcano-system              volcano-admission-77d7d48b68-tmx68                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  volcano-system              volcano-controllers-56675bb4d5-tntd9                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  volcano-system              volcano-scheduler-576bc46687-k5qvm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             530Mi (13%)  340Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-565000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-565000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-565000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-565000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-565000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-565000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-565000 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-565000 event: Registered Node addons-565000 in Controller
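Editor's note on the node description above: the percentages in the "Allocated resources" block are simply the summed pod requests divided by the node's allocatable capacity, truncated to whole percent. A sketch with the values copied from this node (2 CPUs, 3912944Ki memory, 1050m CPU and 530Mi memory requested):

```python
# Sketch: how the "Allocated resources" percentages above are derived
# from allocatable capacity. All values are copied from the node output.
allocatable_cpu_m = 2 * 1000          # cpu: 2 -> 2000 millicores
allocatable_mem_ki = 3912944          # memory: 3912944Ki

cpu_requests_m = 1050                 # summed pod CPU requests (1050m)
mem_requests_ki = 530 * 1024          # summed memory requests (530Mi)

# Integer division truncates, matching kubectl's 52% / 13% display.
cpu_pct = cpu_requests_m * 100 // allocatable_cpu_m
mem_pct = mem_requests_ki * 100 // allocatable_mem_ki
print(cpu_pct, mem_pct)  # -> 52 13
```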
	
	
	==> dmesg <==
	[  +5.016253] kauditd_printk_skb: 141 callbacks suppressed
	[  +9.031633] kauditd_printk_skb: 80 callbacks suppressed
	[ +14.066384] kauditd_printk_skb: 9 callbacks suppressed
	[ +11.144245] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.012317] kauditd_printk_skb: 3 callbacks suppressed
	[  +6.162700] kauditd_printk_skb: 31 callbacks suppressed
	[  +7.139234] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:31] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.773109] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.780824] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.892089] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.547903] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 6 18:32] kauditd_printk_skb: 28 callbacks suppressed
	[ +41.821165] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.603321] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 6 18:36] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:37] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:42] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:44] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.087352] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.843089] kauditd_printk_skb: 2 callbacks suppressed
	[Sep 6 18:45] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.866659] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.611418] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.469956] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [bd4a544e47c7] <==
	{"level":"info","ts":"2024-09-06T18:30:05.400799Z","caller":"traceutil/trace.go:171","msg":"trace[708230896] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"206.096247ms","start":"2024-09-06T18:30:05.194698Z","end":"2024-09-06T18:30:05.400794Z","steps":["trace[708230896] 'process raft request'  (duration: 147.722312ms)","trace[708230896] 'compare'  (duration: 58.192431ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-06T18:30:05.400887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.242992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:05.400901Z","caller":"traceutil/trace.go:171","msg":"trace[95253260] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:741; }","duration":"197.264939ms","start":"2024-09-06T18:30:05.203632Z","end":"2024-09-06T18:30:05.400897Z","steps":["trace[95253260] 'agreement among raft nodes before linearized reading'  (duration: 197.234026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:05.401920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.776734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx\" ","response":"range_response_count:1 size:1009"}
	{"level":"info","ts":"2024-09-06T18:30:05.401941Z","caller":"traceutil/trace.go:171","msg":"trace[1880089099] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx; range_end:; response_count:1; response_revision:742; }","duration":"145.799915ms","start":"2024-09-06T18:30:05.256136Z","end":"2024-09-06T18:30:05.401936Z","steps":["trace[1880089099] 'agreement among raft nodes before linearized reading'  (duration: 145.688306ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:05.402014Z","caller":"traceutil/trace.go:171","msg":"trace[1986133511] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"145.808796ms","start":"2024-09-06T18:30:05.256200Z","end":"2024-09-06T18:30:05.402008Z","steps":["trace[1986133511] 'process raft request'  (duration: 145.590545ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:05.402370Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.317482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-6f6b679f8f-jjpz5\" ","response":"range_response_count:1 size:5091"}
	{"level":"info","ts":"2024-09-06T18:30:05.402384Z","caller":"traceutil/trace.go:171","msg":"trace[764227157] range","detail":"{range_begin:/registry/pods/kube-system/coredns-6f6b679f8f-jjpz5; range_end:; response_count:1; response_revision:742; }","duration":"116.332238ms","start":"2024-09-06T18:30:05.286048Z","end":"2024-09-06T18:30:05.402380Z","steps":["trace[764227157] 'agreement among raft nodes before linearized reading'  (duration: 116.307035ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:07.838065Z","caller":"traceutil/trace.go:171","msg":"trace[1244408162] transaction","detail":"{read_only:false; response_revision:929; number_of_response:1; }","duration":"147.065686ms","start":"2024-09-06T18:30:07.690989Z","end":"2024-09-06T18:30:07.838055Z","steps":["trace[1244408162] 'process raft request'  (duration: 145.182017ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:07.838385Z","caller":"traceutil/trace.go:171","msg":"trace[1088116875] transaction","detail":"{read_only:false; response_revision:930; number_of_response:1; }","duration":"147.162007ms","start":"2024-09-06T18:30:07.691218Z","end":"2024-09-06T18:30:07.838380Z","steps":["trace[1088116875] 'process raft request'  (duration: 147.10043ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:07.838532Z","caller":"traceutil/trace.go:171","msg":"trace[1157367360] transaction","detail":"{read_only:false; response_revision:931; number_of_response:1; }","duration":"147.273724ms","start":"2024-09-06T18:30:07.691254Z","end":"2024-09-06T18:30:07.838528Z","steps":["trace[1157367360] 'process raft request'  (duration: 147.088182ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:30:35.893117Z","caller":"traceutil/trace.go:171","msg":"trace[345980273] linearizableReadLoop","detail":"{readStateIndex:1085; appliedIndex:1084; }","duration":"118.48708ms","start":"2024-09-06T18:30:35.774616Z","end":"2024-09-06T18:30:35.893103Z","steps":["trace[345980273] 'read index received'  (duration: 118.349163ms)","trace[345980273] 'applied index is now lower than readState.Index'  (duration: 137.226µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:30:35.893231Z","caller":"traceutil/trace.go:171","msg":"trace[256057286] transaction","detail":"{read_only:false; response_revision:1061; number_of_response:1; }","duration":"118.810799ms","start":"2024-09-06T18:30:35.774413Z","end":"2024-09-06T18:30:35.893224Z","steps":["trace[256057286] 'process raft request'  (duration: 118.567955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:30:35.893386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.76113ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:30:35.893411Z","caller":"traceutil/trace.go:171","msg":"trace[1103130949] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1061; }","duration":"118.790592ms","start":"2024-09-06T18:30:35.774614Z","end":"2024-09-06T18:30:35.893404Z","steps":["trace[1103130949] 'agreement among raft nodes before linearized reading'  (duration: 118.753414ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:39:48.024605Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1744}
	{"level":"info","ts":"2024-09-06T18:39:48.073351Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1744,"took":"47.316907ms","hash":663844247,"current-db-size-bytes":8335360,"current-db-size":"8.3 MB","current-db-size-in-use-bytes":4448256,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-06T18:39:48.073643Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":663844247,"revision":1744,"compact-revision":-1}
	{"level":"info","ts":"2024-09-06T18:44:44.896184Z","caller":"traceutil/trace.go:171","msg":"trace[1871266957] transaction","detail":"{read_only:false; response_revision:2638; number_of_response:1; }","duration":"124.89299ms","start":"2024-09-06T18:44:44.771283Z","end":"2024-09-06T18:44:44.896176Z","steps":["trace[1871266957] 'process raft request'  (duration: 124.479722ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:44:44.897554Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.463719ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:44:44.897624Z","caller":"traceutil/trace.go:171","msg":"trace[1417800252] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2638; }","duration":"123.522296ms","start":"2024-09-06T18:44:44.774069Z","end":"2024-09-06T18:44:44.897591Z","steps":["trace[1417800252] 'agreement among raft nodes before linearized reading'  (duration: 123.43605ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:44:44.898800Z","caller":"traceutil/trace.go:171","msg":"trace[1881837133] linearizableReadLoop","detail":"{readStateIndex:2847; appliedIndex:2846; }","duration":"121.949063ms","start":"2024-09-06T18:44:44.774074Z","end":"2024-09-06T18:44:44.896023Z","steps":["trace[1881837133] 'read index received'  (duration: 119.565887ms)","trace[1881837133] 'applied index is now lower than readState.Index'  (duration: 2.382664ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:44:48.028184Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2200}
	{"level":"info","ts":"2024-09-06T18:44:48.042412Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2200,"took":"13.752391ms","hash":2685747484,"current-db-size-bytes":8335360,"current-db-size":"8.3 MB","current-db-size-in-use-bytes":3239936,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-09-06T18:44:48.042550Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2685747484,"revision":2200,"compact-revision":1744}
	
	
	==> gcp-auth [d5be48ce48be] <==
	2024/09/06 18:32:58 GCP Auth Webhook started!
	2024/09/06 18:33:14 Ready to marshal response ...
	2024/09/06 18:33:14 Ready to write response ...
	2024/09/06 18:33:15 Ready to marshal response ...
	2024/09/06 18:33:15 Ready to write response ...
	2024/09/06 18:36:18 Ready to marshal response ...
	2024/09/06 18:36:18 Ready to write response ...
	2024/09/06 18:36:18 Ready to marshal response ...
	2024/09/06 18:36:18 Ready to write response ...
	2024/09/06 18:36:18 Ready to marshal response ...
	2024/09/06 18:36:18 Ready to write response ...
	2024/09/06 18:44:31 Ready to marshal response ...
	2024/09/06 18:44:31 Ready to write response ...
	2024/09/06 18:44:38 Ready to marshal response ...
	2024/09/06 18:44:38 Ready to write response ...
	2024/09/06 18:45:00 Ready to marshal response ...
	2024/09/06 18:45:00 Ready to write response ...
	2024/09/06 18:45:33 Ready to marshal response ...
	2024/09/06 18:45:33 Ready to write response ...
	2024/09/06 18:45:33 Ready to marshal response ...
	2024/09/06 18:45:33 Ready to write response ...
	
	
	==> kernel <==
	 18:45:34 up 16 min,  0 users,  load average: 1.18, 0.47, 0.39
	Linux addons-565000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [df1ae4949b89] <==
	W0906 18:31:32.403298       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:33.415313       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:31:34.425178       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.184.68:443: connect: connection refused
	W0906 18:32:11.924032       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.150.208:443: connect: connection refused
	E0906 18:32:11.924079       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.150.208:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:11.983798       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.150.208:443: connect: connection refused
	E0906 18:32:11.984126       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.150.208:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:53.779967       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.150.208:443: connect: connection refused
	E0906 18:32:53.780057       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.150.208:443: connect: connection refused" logger="UnhandledError"
	I0906 18:33:14.880747       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0906 18:33:14.895991       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0906 18:44:51.916072       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 18:45:15.617156       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:45:15.617225       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:45:15.631183       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:45:15.631233       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:45:15.640466       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:45:15.640800       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:45:15.731520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:45:15.731541       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:45:15.761281       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:45:15.762015       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 18:45:16.733120       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 18:45:16.762790       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 18:45:16.773999       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [2f4f1dc29a3a] <==
	W0906 18:45:18.115238       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:18.115335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:45:19.422451       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:19.422510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:45:19.814518       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:19.814598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:45:20.782101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:20.782212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:45:21.331892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.87µs"
	W0906 18:45:23.303004       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:23.303075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:45:25.147386       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:25.147447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:45:26.360966       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:26.361007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:45:26.473151       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0906 18:45:26.473239       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 18:45:26.684408       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0906 18:45:26.684551       1 shared_informer.go:320] Caches are synced for garbage collector
	W0906 18:45:30.567052       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:30.567244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:45:31.393909       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0906 18:45:32.171229       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="2.874µs"
	W0906 18:45:32.235893       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:32.235921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [ec9f84a5d059] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:29:58.755524       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:29:58.785806       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.21"]
	E0906 18:29:58.785875       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:29:58.923560       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:29:58.923609       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:29:58.923627       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:29:58.936971       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:29:58.937151       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:29:58.937159       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:29:58.938393       1 config.go:197] "Starting service config controller"
	I0906 18:29:58.938409       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:29:58.938423       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:29:58.938426       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:29:58.949921       1 config.go:326] "Starting node config controller"
	I0906 18:29:58.949931       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:29:59.038845       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 18:29:59.038875       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:29:59.050218       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7491eeec20a6] <==
	W0906 18:29:48.007678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:29:48.007861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.007876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 18:29:48.008041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.007966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 18:29:48.008215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.008102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:48.008440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.922727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 18:29:48.922876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.948745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 18:29:48.948964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:48.996465       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:29:48.996631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.015909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:29:49.015967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.022235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:49.022302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.052321       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:29:49.052365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.112247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 18:29:49.112297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:29:49.218398       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 18:29:49.218461       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 18:29:51.293148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:45:33 addons-565000 kubelet[2042]: E0906 18:45:33.311641    2042 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a2c23b0-9024-42ce-9924-42afdfdbc0de" containerName="registry"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311669    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed8757b-de74-4154-b3a6-cbe152481ab1" containerName="yakd"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311675    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ca7619-3cef-4ad1-886a-73bfe39cfdc9" containerName="csi-resizer"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311680    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="abafc424-abbe-4002-9dea-b3e02be0fbf0" containerName="liveness-probe"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311684    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="1241891a-f5f4-4e10-ad7b-3c7977e9bb11" containerName="registry-proxy"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311687    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="abafc424-abbe-4002-9dea-b3e02be0fbf0" containerName="node-driver-registrar"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311691    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2c23b0-9024-42ce-9924-42afdfdbc0de" containerName="registry"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311695    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="abafc424-abbe-4002-9dea-b3e02be0fbf0" containerName="csi-provisioner"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311698    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="500841ed-6cb6-4a7a-87bd-4242010b2e9b" containerName="volume-snapshot-controller"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311702    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="6787b10c-0be2-4164-8a9b-3afbcce0c71b" containerName="volume-snapshot-controller"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311706    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c64849-b5bc-4889-b0bd-bff533458c95" containerName="csi-attacher"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311709    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="abafc424-abbe-4002-9dea-b3e02be0fbf0" containerName="csi-external-health-monitor-controller"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311713    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="abafc424-abbe-4002-9dea-b3e02be0fbf0" containerName="hostpath"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311716    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="abafc424-abbe-4002-9dea-b3e02be0fbf0" containerName="csi-snapshotter"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311720    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa1523c-25e2-4776-be7b-082ac60d2875" containerName="nvidia-device-plugin-ctr"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.311724    2042 memory_manager.go:354] "RemoveStaleState removing state" podUID="326e8c29-a8aa-4f17-82e9-9ff81a8a9f25" containerName="task-pv-container"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.344652    2042 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mzc7c\" (UniqueName: \"kubernetes.io/projected/1aa1523c-25e2-4776-be7b-082ac60d2875-kube-api-access-mzc7c\") on node \"addons-565000\" DevicePath \"\""
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.344673    2042 reconciler_common.go:288] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/1aa1523c-25e2-4776-be7b-082ac60d2875-device-plugin\") on node \"addons-565000\" DevicePath \"\""
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.445663    2042 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbp6l\" (UniqueName: \"kubernetes.io/projected/33ad7c53-69a2-41d2-87c3-248cc49d0e22-kube-api-access-dbp6l\") pod \"helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583\" (UID: \"33ad7c53-69a2-41d2-87c3-248cc49d0e22\") " pod="local-path-storage/helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.445765    2042 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/33ad7c53-69a2-41d2-87c3-248cc49d0e22-script\") pod \"helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583\" (UID: \"33ad7c53-69a2-41d2-87c3-248cc49d0e22\") " pod="local-path-storage/helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.445797    2042 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/33ad7c53-69a2-41d2-87c3-248cc49d0e22-data\") pod \"helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583\" (UID: \"33ad7c53-69a2-41d2-87c3-248cc49d0e22\") " pod="local-path-storage/helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.445825    2042 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33ad7c53-69a2-41d2-87c3-248cc49d0e22-gcp-creds\") pod \"helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583\" (UID: \"33ad7c53-69a2-41d2-87c3-248cc49d0e22\") " pod="local-path-storage/helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.571674    2042 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1241891a-f5f4-4e10-ad7b-3c7977e9bb11" path="/var/lib/kubelet/pods/1241891a-f5f4-4e10-ad7b-3c7977e9bb11/volumes"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.572039    2042 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aff690b-bd85-4e90-add4-46afa90f9e06" path="/var/lib/kubelet/pods/6aff690b-bd85-4e90-add4-46afa90f9e06/volumes"
	Sep 06 18:45:33 addons-565000 kubelet[2042]: I0906 18:45:33.572252    2042 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2c23b0-9024-42ce-9924-42afdfdbc0de" path="/var/lib/kubelet/pods/9a2c23b0-9024-42ce-9924-42afdfdbc0de/volumes"
	
	
	==> storage-provisioner [ca501c42a673] <==
	I0906 18:30:03.543389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:03.559445       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:03.559470       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:03.575576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:03.575729       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-565000_2f68d27c-ee72-40a1-9b10-86450d95520c!
	I0906 18:30:03.576811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a969d9b-e992-4942-aff3-78e2d12b7575", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-565000_2f68d27c-ee72-40a1-9b10-86450d95520c became leader
	I0906 18:30:03.676508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-565000_2f68d27c-ee72-40a1-9b10-86450d95520c!
	E0906 18:45:08.700925       1 controller.go:1050] claim "025b2260-05a8-44ef-a049-b55643750705" in work queue no longer exists
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-565000 -n addons-565000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-565000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-565000 describe pod busybox test-local-path ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-565000 describe pod busybox test-local-path ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583 test-job-nginx-0: exit status 1 (63.29537ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-565000/192.169.0.21
	Start Time:       Fri, 06 Sep 2024 11:36:18 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v55tv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v55tv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-565000
	  Normal   Pulling    7m53s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m53s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m53s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m40s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvf8g (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-zvf8g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5gcgq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4nkbp" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-565000 describe pod busybox test-local-path ingress-nginx-admission-create-5gcgq ingress-nginx-admission-patch-4nkbp helper-pod-create-pvc-9e845a53-9b9e-407e-94ba-aac6318fa583 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.53s)

TestCertOptions (251.66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-730000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0906 12:46:59.969952    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:47:27.689166    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:47:56.587790    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:47:59.573123    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:48:13.506335    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-730000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m5.973824766s)

-- stdout --
	* [cert-options-730000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-730000" primary control-plane node in "cert-options-730000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-730000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 66:86:5e:17:61:52
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-730000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:37:26:90:a2:31
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:37:26:90:a2:31
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-730000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-730000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-730000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (162.741574ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-730000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-730000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-730000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-730000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-730000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (162.877406ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-730000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-730000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-730000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-06 12:49:53.094969 -0700 PDT m=+4870.205160654
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-730000 -n cert-options-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-730000 -n cert-options-730000: exit status 7 (80.722483ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0906 12:49:53.173878   14454 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 12:49:53.173905   14454 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-730000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-730000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-730000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-730000: (5.238656231s)
--- FAIL: TestCertOptions (251.66s)

TestCertExpiration (1730.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-618000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0906 12:44:43.847141    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-618000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.378691005s)

-- stdout --
	* [cert-expiration-618000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-618000" primary control-plane node in "cert-expiration-618000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-618000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for aa:c7:e0:7e:64:8
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-618000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 56:5d:d1:49:6b:23
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 56:5d:d1:49:6b:23
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-618000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-618000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0906 12:51:59.968753    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-618000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m39.269711596s)

-- stdout --
	* [cert-expiration-618000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-618000" primary control-plane node in "cert-expiration-618000" cluster
	* Updating the running hyperkit "cert-expiration-618000" VM ...
	* Updating the running hyperkit "cert-expiration-618000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-618000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-618000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-618000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-618000" primary control-plane node in "cert-expiration-618000" cluster
	* Updating the running hyperkit "cert-expiration-618000" VM ...
	* Updating the running hyperkit "cert-expiration-618000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-618000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-06 13:13:29.183632 -0700 PDT m=+6286.246274201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-618000 -n cert-expiration-618000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-618000 -n cert-expiration-618000: exit status 7 (81.761909ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0906 13:13:29.263314   15908 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 13:13:29.263335   15908 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-618000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-618000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-618000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-618000: (5.24896945s)
--- FAIL: TestCertExpiration (1730.98s)

TestDockerFlags (251.98s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-753000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0906 12:41:59.970983    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:41:59.978630    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:41:59.990043    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:00.013152    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:00.056529    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:00.138322    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:00.299797    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:00.623208    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:01.266661    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:02.548624    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:05.112076    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:10.235538    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:20.478949    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:40.961635    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:42:59.576230    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:43:13.508673    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:43:21.924741    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-753000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.142892666s)

-- stdout --
	* [docker-flags-753000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-753000" primary control-plane node in "docker-flags-753000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-753000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0906 12:41:34.777290   14256 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:41:34.777556   14256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:41:34.777563   14256 out.go:358] Setting ErrFile to fd 2...
	I0906 12:41:34.777567   14256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:41:34.777771   14256 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:41:34.779259   14256 out.go:352] Setting JSON to false
	I0906 12:41:34.801712   14256 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13265,"bootTime":1725638429,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:41:34.801809   14256 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:41:34.823287   14256 out.go:177] * [docker-flags-753000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:41:34.866177   14256 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:41:34.866198   14256 notify.go:220] Checking for updates...
	I0906 12:41:34.907020   14256 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:41:34.928021   14256 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:41:34.948846   14256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:41:34.969050   14256 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:41:34.990071   14256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:41:35.011342   14256 config.go:182] Loaded profile config "force-systemd-flag-489000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:41:35.011436   14256 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:41:35.040016   14256 out.go:177] * Using the hyperkit driver based on user configuration
	I0906 12:41:35.081836   14256 start.go:297] selected driver: hyperkit
	I0906 12:41:35.081851   14256 start.go:901] validating driver "hyperkit" against <nil>
	I0906 12:41:35.081863   14256 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:41:35.084935   14256 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:41:35.085063   14256 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:41:35.093750   14256 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:41:35.097680   14256 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:41:35.097702   14256 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:41:35.097736   14256 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:41:35.097943   14256 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0906 12:41:35.098005   14256 cni.go:84] Creating CNI manager for ""
	I0906 12:41:35.098019   14256 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:41:35.098026   14256 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:41:35.098099   14256 start.go:340] cluster config:
	{Name:docker-flags-753000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:41:35.098188   14256 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:41:35.140069   14256 out.go:177] * Starting "docker-flags-753000" primary control-plane node in "docker-flags-753000" cluster
	I0906 12:41:35.160821   14256 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:41:35.160853   14256 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:41:35.160867   14256 cache.go:56] Caching tarball of preloaded images
	I0906 12:41:35.160976   14256 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:41:35.160985   14256 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:41:35.161058   14256 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/docker-flags-753000/config.json ...
	I0906 12:41:35.161074   14256 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/docker-flags-753000/config.json: {Name:mk38cbf3d63b7e6e29d0cfc9bbb8be996fc574d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:41:35.161377   14256 start.go:360] acquireMachinesLock for docker-flags-753000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:42:31.917889   14256 start.go:364] duration metric: took 56.701996241s to acquireMachinesLock for "docker-flags-753000"
	I0906 12:42:31.917950   14256 start.go:93] Provisioning new machine with config: &{Name:docker-flags-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:42:31.918022   14256 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:42:31.939469   14256 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:42:31.939646   14256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:42:31.939684   14256 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:42:31.948210   14256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58153
	I0906 12:42:31.948551   14256 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:42:31.948977   14256 main.go:141] libmachine: Using API Version  1
	I0906 12:42:31.948987   14256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:42:31.949191   14256 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:42:31.949315   14256 main.go:141] libmachine: (docker-flags-753000) Calling .GetMachineName
	I0906 12:42:31.949411   14256 main.go:141] libmachine: (docker-flags-753000) Calling .DriverName
	I0906 12:42:31.949510   14256 start.go:159] libmachine.API.Create for "docker-flags-753000" (driver="hyperkit")
	I0906 12:42:31.949532   14256 client.go:168] LocalClient.Create starting
	I0906 12:42:31.949562   14256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:42:31.949610   14256 main.go:141] libmachine: Decoding PEM data...
	I0906 12:42:31.949624   14256 main.go:141] libmachine: Parsing certificate...
	I0906 12:42:31.949688   14256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:42:31.949726   14256 main.go:141] libmachine: Decoding PEM data...
	I0906 12:42:31.949738   14256 main.go:141] libmachine: Parsing certificate...
	I0906 12:42:31.949752   14256 main.go:141] libmachine: Running pre-create checks...
	I0906 12:42:31.949763   14256 main.go:141] libmachine: (docker-flags-753000) Calling .PreCreateCheck
	I0906 12:42:31.949834   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:31.950022   14256 main.go:141] libmachine: (docker-flags-753000) Calling .GetConfigRaw
	I0906 12:42:31.960406   14256 main.go:141] libmachine: Creating machine...
	I0906 12:42:31.960417   14256 main.go:141] libmachine: (docker-flags-753000) Calling .Create
	I0906 12:42:31.960503   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:31.960627   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:42:31.960499   14276 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:42:31.960718   14256 main.go:141] libmachine: (docker-flags-753000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:42:32.168627   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:42:32.168546   14276 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/id_rsa...
	I0906 12:42:32.248607   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:42:32.248523   14276 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/docker-flags-753000.rawdisk...
	I0906 12:42:32.248619   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Writing magic tar header
	I0906 12:42:32.248630   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Writing SSH key tar header
	I0906 12:42:32.249234   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:42:32.249193   14276 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000 ...
	I0906 12:42:32.627549   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:32.627571   14256 main.go:141] libmachine: (docker-flags-753000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/hyperkit.pid
	I0906 12:42:32.627621   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Using UUID 8777139c-3c16-454b-9182-b9015bbb93e3
	I0906 12:42:32.652803   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Generated MAC 1e:50:c4:be:3b:66
	I0906 12:42:32.652819   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-753000
	I0906 12:42:32.652849   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8777139c-3c16-454b-9182-b9015bbb93e3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:42:32.652877   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8777139c-3c16-454b-9182-b9015bbb93e3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:42:32.652917   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8777139c-3c16-454b-9182-b9015bbb93e3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/docker-flags-753000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-753000"}
	I0906 12:42:32.652953   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8777139c-3c16-454b-9182-b9015bbb93e3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/docker-flags-753000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-753000"
	I0906 12:42:32.652971   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:42:32.655933   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 DEBUG: hyperkit: Pid is 14277
	I0906 12:42:32.656346   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 0
	I0906 12:42:32.656361   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:32.656417   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:32.657347   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:32.657428   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:32.657443   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:32.657472   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:32.657485   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:32.657499   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:32.657508   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:32.657517   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:32.657526   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:32.657540   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:32.657550   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:32.657561   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:32.657574   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:32.657587   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:32.657612   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:32.657634   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:32.657650   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:32.657664   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:32.657673   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:32.657681   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:32.657693   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:32.657713   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:32.657732   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:32.657746   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:32.657756   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:32.657764   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:32.657773   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:32.657785   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:32.657793   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:32.657799   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:32.657809   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:32.657822   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:32.657844   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:32.657858   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:32.657867   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:32.657874   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:32.657887   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:32.657899   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:32.657912   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:32.664024   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:42:32.672054   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:42:32.673038   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:42:32.673053   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:42:32.673071   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:42:32.673081   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:42:33.052913   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:42:33.052928   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:42:33.167669   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:42:33.167688   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:42:33.167701   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:42:33.167712   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:42:33.168590   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:42:33.168615   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:42:34.657756   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 1
	I0906 12:42:34.657770   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:34.657868   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:34.658673   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:34.658741   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:34.658752   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:34.658762   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:34.658769   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:34.658776   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:34.658784   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:34.658796   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:34.658809   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:34.658821   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:34.658829   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:34.658840   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:34.658855   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:34.658866   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:34.658873   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:34.658883   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:34.658893   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:34.658901   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:34.658946   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:34.658964   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:34.658972   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:34.658980   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:34.658989   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:34.658997   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:34.659022   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:34.659034   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:34.659045   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:34.659053   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:34.659061   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:34.659068   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:34.659077   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:34.659084   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:34.659090   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:34.659101   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:34.659110   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:34.659117   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:34.659126   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:34.659133   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:34.659143   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:36.659654   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 2
	I0906 12:42:36.659670   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:36.659683   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:36.660493   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:36.660506   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:36.660512   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:36.660519   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:36.660544   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:36.660560   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:36.660570   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:36.660580   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:36.660588   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:36.660596   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:36.660602   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:36.660611   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:36.660617   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:36.660623   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:36.660630   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:36.660659   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:36.660666   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:36.660683   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:36.660693   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:36.660699   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:36.660705   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:36.660712   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:36.660720   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:36.660735   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:36.660746   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:36.660775   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:36.660788   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:36.660802   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:36.660808   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:36.660815   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:36.660820   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:36.660827   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:36.660834   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:36.660841   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:36.660848   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:36.660855   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:36.660861   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:36.660872   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:36.660888   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:38.580688   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:42:38.580863   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:42:38.580872   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:42:38.600228   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:42:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:42:38.662158   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 3
	I0906 12:42:38.662187   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:38.662345   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:38.663533   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:38.663620   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:38.663639   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:38.663692   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:38.663703   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:38.663711   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:38.663735   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:38.663749   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:38.663760   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:38.663771   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:38.663802   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:38.663822   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:38.663834   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:38.663844   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:38.663855   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:38.663869   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:38.663877   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:38.663886   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:38.663895   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:38.663907   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:38.663917   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:38.663927   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:38.663937   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:38.663948   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:38.663961   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:38.663971   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:38.663982   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:38.663989   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:38.664007   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:38.664024   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:38.664047   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:38.664058   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:38.664071   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:38.664083   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:38.664093   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:38.664103   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:38.664113   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:38.664121   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:38.664134   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:40.664058   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 4
	I0906 12:42:40.664075   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:40.664188   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:40.664961   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:40.665020   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:40.665035   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:40.665042   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:40.665049   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:40.665055   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:40.665070   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:40.665079   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:40.665087   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:40.665093   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:40.665116   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:40.665127   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:40.665135   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:40.665142   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:40.665148   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:40.665154   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:40.665163   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:40.665173   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:40.665181   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:40.665192   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:40.665201   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:40.665208   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:40.665215   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:40.665220   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:40.665227   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:40.665233   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:40.665253   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:40.665263   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:40.665274   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:40.665282   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:40.665289   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:40.665297   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:40.665313   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:40.665327   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:40.665335   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:40.665341   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:40.665348   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:40.665356   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:40.665365   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:42.667218   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 5
	I0906 12:42:42.667231   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:42.667267   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:42.668072   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:42.668130   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:42.668145   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:42.668155   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:42.668162   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:42.668170   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:42.668178   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:42.668185   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:42.668195   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:42.668201   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:42.668207   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:42.668212   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:42.668230   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:42.668240   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:42.668248   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:42.668258   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:42.668270   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:42.668278   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:42.668284   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:42.668290   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:42.668297   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:42.668319   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:42.668331   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:42.668341   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:42.668349   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:42.668356   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:42.668368   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:42.668376   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:42.668384   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:42.668391   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:42.668407   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:42.668414   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:42.668421   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:42.668428   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:42.668439   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:42.668449   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:42.668458   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:42.668464   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:42.668482   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:44.670343   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 6
	I0906 12:42:44.670357   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:44.670426   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:44.671174   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:44.671244   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:44.671257   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:44.671277   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:44.671288   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:44.671304   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:44.671325   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:44.671340   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:44.671354   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:44.671363   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:44.671368   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:44.671380   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:44.671394   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:44.671404   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:44.671414   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:44.671423   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:44.671432   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:44.671441   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:44.671449   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:44.671457   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:44.671466   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:44.671473   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:44.671480   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:44.671491   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:44.671498   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:44.671506   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:44.671520   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:44.671528   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:44.671535   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:44.671542   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:44.671556   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:44.671568   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:44.671581   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:44.671591   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:44.671598   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:44.671606   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:44.671614   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:44.671622   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:44.671631   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:46.671545   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 7
	I0906 12:42:46.671567   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:46.671612   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:46.672375   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:46.672443   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:46.672454   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:46.672461   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:46.672468   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:46.672485   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:46.672492   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:46.672506   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:46.672512   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:46.672521   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:46.672528   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:46.672536   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:46.672545   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:46.672552   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:46.672559   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:46.672574   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:46.672588   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:46.672597   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:46.672602   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:46.672629   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:46.672665   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:46.672686   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:46.672700   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:46.672708   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:46.672716   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:46.672723   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:46.672730   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:46.672742   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:46.672753   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:46.672766   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:46.672785   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:46.672805   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:46.672816   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:46.672823   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:46.672831   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:46.672838   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:46.672845   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:46.672852   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:46.672857   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:48.674170   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 8
	I0906 12:42:48.674186   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:48.674230   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:48.675012   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:48.675053   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:48.675066   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:48.675086   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:48.675097   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:48.675104   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:48.675121   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:48.675128   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:48.675139   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:48.675147   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:48.675154   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:48.675161   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:48.675167   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:48.675181   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:48.675196   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:48.675208   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:48.675217   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:48.675225   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:48.675239   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:48.675250   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:48.675257   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:48.675264   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:48.675271   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:48.675279   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:48.675294   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:48.675307   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:48.675316   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:48.675323   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:48.675331   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:48.675347   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:48.675353   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:48.675360   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:48.675368   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:48.675375   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:48.675383   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:48.675389   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:48.675395   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:48.675401   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:48.675410   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
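The attempts above all follow the same pattern: the hyperkit driver polls macOS's `/var/db/dhcpd_leases` every two seconds, looking for an entry whose `HWAddress` matches the VM's generated MAC (`1e:50:c4:be:3b:66` here); the VM never acquires a lease, so each scan of the 37 existing entries comes up empty. A minimal sketch of that lookup, assuming the lease-entry format shown in the log (the function name `findIPByMAC` is hypothetical, not the actual driver code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findIPByMAC scans dhcpd_leases-style entries (the format logged above)
// and returns the IPAddress whose HWAddress equals mac, or "" if no
// matching lease exists yet. Illustrative only; the real driver lives in
// docker-machine-driver-hyperkit.
func findIPByMAC(leases, mac string) string {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)
	for _, line := range strings.Split(leases, "\n") {
		if m := re.FindStringSubmatch(line); m != nil && m[2] == mac {
			return m[1]
		}
	}
	return ""
}

func main() {
	// Two entries copied from the log output above.
	sample := "{Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}\n" +
		"{Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}"
	fmt.Println(findIPByMAC(sample, "ce:b4:2c:88:50:23"))      // prints 192.169.0.20
	fmt.Println(findIPByMAC(sample, "1e:50:c4:be:3b:66") == "") // prints true: the MAC has no lease, so the driver retries
}
```

Each failed scan corresponds to one "Attempt N" block in the log; the driver keeps retrying until the lease appears or it gives up, which is why the same 37 entries are printed on every attempt.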
	I0906 12:42:50.675684   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 9
	I0906 12:42:50.675697   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:50.675781   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:50.676553   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:50.676625   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:50.676637   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:50.676667   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:50.676679   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:50.676690   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:50.676698   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:50.676705   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:50.676712   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:50.676721   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:50.676734   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:50.676743   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:50.676756   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:50.676769   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:50.676785   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:50.676794   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:50.676801   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:50.676808   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:50.676814   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:50.676822   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:50.676837   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:50.676850   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:50.676858   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:50.676870   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:50.676878   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:50.676885   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:50.676892   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:50.676900   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:50.676913   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:50.676921   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:50.676937   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:50.676948   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:50.676956   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:50.676965   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:50.676973   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:50.676994   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:50.677010   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:50.677022   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:50.677040   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:52.678253   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 10
	I0906 12:42:52.678269   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:52.678340   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:52.679149   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:52.679215   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:52.679225   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:52.679234   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:52.679250   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:52.679257   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:52.679264   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:52.679278   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:52.679288   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:52.679295   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:52.679303   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:52.679309   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:52.679318   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:52.679325   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:52.679334   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:52.679342   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:52.679350   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:52.679365   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:52.679377   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:52.679385   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:52.679400   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:52.679408   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:52.679416   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:52.679427   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:52.679435   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:52.679451   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:52.679465   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:52.679474   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:52.679483   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:52.679490   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:52.679498   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:52.679504   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:52.679512   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:52.679519   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:52.679527   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:52.679534   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:52.679541   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:52.679549   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:52.679556   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:54.680533   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 11
	I0906 12:42:54.680548   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:54.680606   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:54.681369   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:54.681444   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:54.681455   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:54.681462   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:54.681469   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:54.681477   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:54.681486   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:54.681499   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:54.681506   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:54.681523   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:54.681533   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:54.681542   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:54.681548   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:54.681554   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:54.681561   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:54.681568   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:54.681574   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:54.681589   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:54.681598   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:54.681605   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:54.681613   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:54.681620   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:54.681626   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:54.681634   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:54.681641   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:54.681646   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:54.681654   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:54.681670   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:54.681682   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:54.681690   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:54.681697   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:54.681705   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:54.681712   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:54.681726   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:54.681738   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:54.681746   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:54.681754   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:54.681770   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:54.681779   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:56.683693   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 12
	I0906 12:42:56.683709   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:56.683753   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:56.684508   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:56.684574   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:56.684585   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:56.684593   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:56.684613   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:56.684629   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:56.684643   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:56.684652   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:56.684658   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:56.684664   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:56.684670   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:56.684678   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:56.684692   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:56.684699   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:56.684706   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:56.684714   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:56.684721   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:56.684729   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:56.684736   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:56.684744   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:56.684753   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:56.684761   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:56.684772   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:56.684783   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:56.684801   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:56.684809   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:56.684816   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:56.684825   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:56.684831   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:56.684840   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:56.684848   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:56.684855   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:56.684861   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:56.684868   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:56.684875   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:56.684880   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:56.684905   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:56.684931   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:56.684951   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:58.685900   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 13
	I0906 12:42:58.685915   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:58.685973   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:42:58.686752   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:42:58.686825   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:58.686836   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:58.686877   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:58.686890   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:58.686899   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:58.686905   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:58.686925   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:58.686937   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:58.686947   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:58.686954   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:58.686960   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:58.686971   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:58.686978   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:58.686985   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:58.686993   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:58.686998   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:58.687004   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:58.687010   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:58.687015   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:58.687021   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:58.687026   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:58.687034   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:58.687040   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:58.687045   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:58.687051   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:58.687057   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:58.687064   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:58.687071   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:58.687079   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:58.687095   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:58.687108   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:58.687116   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:58.687124   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:58.687142   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:58.687155   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:58.687165   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:58.687177   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:58.687188   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:00.687623   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 14
	I0906 12:43:00.687638   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:00.687647   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:00.688442   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:00.688503   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:00.688513   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:00.688530   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:00.688543   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:00.688556   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:00.688563   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:00.688591   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:00.688603   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:00.688610   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:00.688619   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:00.688633   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:00.688647   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:00.688655   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:00.688666   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:00.688679   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:00.688701   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:00.688709   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:00.688719   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:00.688729   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:00.688737   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:00.688744   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:00.688751   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:00.688762   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:00.688770   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:00.688788   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:00.688800   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:00.688807   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:00.688814   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:00.688823   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:00.688831   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:00.688844   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:00.688858   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:00.688873   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:00.688881   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:00.688888   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:00.688894   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:00.688908   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:00.688922   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:02.690733   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 15
	I0906 12:43:02.690751   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:02.690783   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:02.691565   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:02.691636   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:02.691645   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:02.691653   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:02.691659   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:02.691666   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:02.691674   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:02.691681   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:02.691687   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:02.691694   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:02.691701   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:02.691708   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:02.691718   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:02.691727   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:02.691736   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:02.691742   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:02.691749   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:02.691760   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:02.691767   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:02.691773   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:02.691780   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:02.691786   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:02.691793   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:02.691800   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:02.691815   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:02.691831   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:02.691842   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:02.691857   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:02.691870   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:02.691877   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:02.691883   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:02.691897   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:02.691911   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:02.691932   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:02.691946   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:02.691956   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:02.691964   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:02.691971   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:02.691977   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:04.693008   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 16
	I0906 12:43:04.693028   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:04.693092   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:04.693859   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:04.693926   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:04.693939   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:04.693955   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:04.693966   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:04.693981   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:04.693995   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:04.694004   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:04.694014   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:04.694021   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:04.694028   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:04.694036   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:04.694062   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:04.694072   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:04.694080   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:04.694094   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:04.694101   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:04.694108   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:04.694117   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:04.694125   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:04.694132   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:04.694138   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:04.694143   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:04.694151   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:04.694171   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:04.694184   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:04.694194   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:04.694207   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:04.694216   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:04.694229   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:04.694237   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:04.694250   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:04.694258   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:04.694269   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:04.694277   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:04.694285   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:04.694291   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:04.694310   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:04.694322   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:06.696144   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 17
	I0906 12:43:06.696157   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:06.696198   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:06.696988   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:06.697043   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:06.697056   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:06.697074   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:06.697082   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:06.697088   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:06.697094   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:06.697101   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:06.697110   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:06.697123   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:06.697142   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:06.697151   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:06.697156   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:06.697166   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:06.697176   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:06.697182   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:06.697202   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:06.697213   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:06.697228   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:06.697243   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:06.697251   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:06.697259   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:06.697265   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:06.697273   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:06.697280   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:06.697288   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:06.697304   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:06.697321   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:06.697330   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:06.697336   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:06.697342   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:06.697350   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:06.697357   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:06.697364   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:06.697378   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:06.697393   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:06.697404   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:06.697414   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:06.697423   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:08.699045   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 18
	I0906 12:43:08.699060   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:08.699106   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:08.699902   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:08.699962   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:08.699973   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:08.699981   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:08.699987   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:08.700001   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:08.700012   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:08.700019   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:08.700026   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:08.700032   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:08.700038   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:08.700045   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:08.700061   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:08.700074   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:08.700084   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:08.700095   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:08.700102   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:08.700110   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:08.700122   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:08.700131   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:08.700138   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:08.700146   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:08.700162   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:08.700169   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:08.700192   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:08.700205   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:08.700212   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:08.700220   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:08.700227   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:08.700234   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:08.700249   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:08.700262   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:08.700271   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:08.700277   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:08.700283   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:08.700289   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:08.700295   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:08.700309   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:08.700321   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:10.702179   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 19
	I0906 12:43:10.702195   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:10.702240   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:10.703045   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:10.703103   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:10.703115   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:10.703123   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:10.703129   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:10.703153   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:10.703166   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:10.703177   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:10.703183   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:10.703191   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:10.703206   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:10.703217   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:10.703230   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:10.703241   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:10.703248   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:10.703257   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:10.703274   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:10.703282   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:10.703296   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:10.703307   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:10.703324   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:10.703333   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:10.703341   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:10.703359   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:10.703366   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:10.703374   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:10.703384   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:10.703391   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:10.703400   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:10.703406   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:10.703414   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:10.703421   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:10.703429   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:10.703445   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:10.703458   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:10.703466   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:10.703472   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:10.703479   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:10.703487   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:12.705078   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 20
	I0906 12:43:12.705093   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:12.705104   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:12.705937   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:12.705982   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:12.705995   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:12.706017   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:12.706027   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:12.706065   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:12.706077   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:12.706087   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:12.706101   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:12.706108   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:12.706114   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:12.706122   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:12.706135   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:12.706147   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:12.706159   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:12.706167   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:12.706174   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:12.706182   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:12.706188   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:12.706196   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:12.706219   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:12.706232   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:12.706240   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:12.706251   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:12.706261   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:12.706267   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:12.706274   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:12.706287   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:12.706299   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:12.706315   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:12.706328   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:12.706336   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:12.706344   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:12.706351   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:12.706358   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:12.706374   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:12.706383   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:12.706390   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:12.706397   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:14.706380   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 21
	I0906 12:43:14.706394   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:14.706475   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:14.707285   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:14.707335   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:14.707345   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:14.707353   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:14.707386   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:14.707402   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:14.707411   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:14.707418   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:14.707425   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:14.707431   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:14.707438   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:14.707452   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:14.707460   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:14.707469   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:14.707477   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:14.707488   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:14.707495   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:14.707502   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:14.707510   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:14.707517   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:14.707524   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:14.707532   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:14.707540   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:14.707547   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:14.707553   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:14.707566   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:14.707578   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:14.707594   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:14.707608   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:14.707617   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:14.707625   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:14.707632   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:14.707647   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:14.707662   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:14.707676   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:14.707691   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:14.707704   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:14.707715   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:14.707725   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:16.709329   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 22
	I0906 12:43:16.709345   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:16.709398   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:16.710209   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:16.710254   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:16.710266   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:16.710285   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:16.710305   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:16.710315   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:16.710326   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:16.710334   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:16.710341   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:16.710348   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:16.710357   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:16.710363   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:16.710377   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:16.710386   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:16.710393   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:16.710400   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:16.710407   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:16.710414   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:16.710430   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:16.710442   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:16.710451   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:16.710465   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:16.710472   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:16.710478   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:16.710494   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:16.710505   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:16.710514   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:16.710521   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:16.710530   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:16.710539   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:16.710546   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:16.710554   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:16.710561   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:16.710567   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:16.710574   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:16.710582   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:16.710606   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:16.710614   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:16.710621   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:18.711277   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 23
	I0906 12:43:18.711301   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:18.711340   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:18.712115   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:18.712188   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:18.712199   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:18.712208   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:18.712216   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:18.712226   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:18.712247   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:18.712259   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:18.712282   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:18.712294   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:18.712302   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:18.712311   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:18.712321   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:18.712329   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:18.712336   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:18.712345   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:18.712353   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:18.712360   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:18.712367   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:18.712384   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:18.712393   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:18.712399   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:18.712407   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:18.712415   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:18.712422   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:18.712436   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:18.712444   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:18.712451   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:18.712457   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:18.712463   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:18.712472   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:18.712479   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:18.712488   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:18.712494   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:18.712502   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:18.712509   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:18.712516   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:18.712525   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:18.712537   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:20.714148   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 24
	I0906 12:43:20.714175   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:20.714211   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:20.714981   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:20.715068   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:20.715078   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:20.715085   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:20.715092   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:20.715108   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:20.715119   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:20.715126   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:20.715133   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:20.715140   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:20.715153   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:20.715169   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:20.715180   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:20.715193   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:20.715202   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:20.715213   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:20.715223   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:20.715232   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:20.715240   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:20.715256   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:20.715265   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:20.715274   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:20.715281   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:20.715292   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:20.715302   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:20.715311   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:20.715318   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:20.715326   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:20.715334   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:20.715347   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:20.715359   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:20.715369   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:20.715378   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:20.715389   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:20.715397   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:20.715405   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:20.715412   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:20.715419   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:20.715427   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:22.717248   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 25
	I0906 12:43:22.717262   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:22.717315   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:22.718107   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:22.718164   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:22.718174   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:22.718183   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:22.718192   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:22.718221   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:22.718228   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:22.718236   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:22.718243   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:22.718248   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:22.718254   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:22.718260   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:22.718266   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:22.718273   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:22.718279   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:22.718286   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:22.718299   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:22.718311   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:22.718329   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:22.718340   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:22.718348   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:22.718356   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:22.718370   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:22.718382   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:22.718390   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:22.718396   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:22.718402   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:22.718409   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:22.718417   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:22.718424   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:22.718432   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:22.718446   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:22.718455   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:22.718462   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:22.718471   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:22.718483   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:22.718492   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:22.718499   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:22.718507   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:24.718805   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 26
	I0906 12:43:24.718818   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:24.718889   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:24.719689   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:24.719714   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:24.719721   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:24.719730   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:24.719739   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:24.719746   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:24.719757   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:24.719772   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:24.719785   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:24.719803   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:24.719814   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:24.719824   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:24.719830   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:24.719836   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:24.719843   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:24.719849   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:24.719856   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:24.719862   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:24.719877   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:24.719884   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:24.719903   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:24.719915   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:24.719927   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:24.719936   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:24.719946   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:24.719960   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:24.719968   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:24.719977   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:24.719985   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:24.719992   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:24.719999   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:24.720015   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:24.720039   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:24.720048   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:24.720054   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:24.720061   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:24.720068   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:24.720075   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:24.720082   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:26.721948   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 27
	I0906 12:43:26.721964   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:26.722012   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:26.722778   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:26.722848   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:26.722859   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:26.722867   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:26.722873   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:26.722879   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:26.722885   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:26.722892   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:26.722898   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:26.722913   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:26.722923   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:26.722930   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:26.722937   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:26.722945   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:26.722953   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:26.722959   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:26.722973   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:26.722982   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:26.722989   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:26.722995   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:26.723008   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:26.723019   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:26.723028   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:26.723035   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:26.723043   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:26.723050   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:26.723056   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:26.723062   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:26.723073   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:26.723088   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:26.723101   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:26.723107   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:26.723115   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:26.723123   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:26.723137   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:26.723151   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:26.723159   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:26.723166   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:26.723172   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:28.723723   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 28
	I0906 12:43:28.723737   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:28.723802   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:28.724582   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:28.724646   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:28.724656   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:28.724685   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:28.724695   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:28.724705   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:28.724712   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:28.724718   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:28.724724   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:28.724739   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:28.724747   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:28.724758   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:28.724768   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:28.724783   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:28.724795   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:28.724820   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:28.724831   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:28.724858   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:28.724871   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:28.724881   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:28.724891   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:28.724899   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:28.724908   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:28.724919   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:28.724927   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:28.724935   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:28.724945   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:28.724951   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:28.724959   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:28.724967   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:28.724975   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:28.724982   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:28.724989   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:28.725000   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:28.725010   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:28.725019   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:28.725028   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:28.725035   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:28.725048   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:30.726264   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 29
	I0906 12:43:30.726281   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:30.726360   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:30.727123   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 1e:50:c4:be:3b:66 in /var/db/dhcpd_leases ...
	I0906 12:43:30.727191   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:30.727205   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:30.727214   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:30.727224   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:30.727230   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:30.727236   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:30.727242   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:30.727249   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:30.727264   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:30.727279   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:30.727289   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:30.727296   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:30.727303   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:30.727309   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:30.727315   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:30.727350   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:30.727363   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:30.727371   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:30.727379   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:30.727385   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:30.727392   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:30.727408   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:30.727426   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:30.727434   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:30.727441   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:30.727448   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:30.727456   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:30.727469   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:30.727474   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:30.727483   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:30.727494   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:30.727501   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:30.727509   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:30.727515   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:30.727520   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:30.727525   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:30.727531   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:30.727537   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:32.729500   14256 client.go:171] duration metric: took 1m0.780247705s to LocalClient.Create
	I0906 12:43:34.731626   14256 start.go:128] duration metric: took 1m2.813896552s to createHost
	I0906 12:43:34.731643   14256 start.go:83] releasing machines lock for "docker-flags-753000", held for 1m2.814040217s
	W0906 12:43:34.731670   14256 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:50:c4:be:3b:66
	I0906 12:43:34.731987   14256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:43:34.732011   14256 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:43:34.740699   14256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58155
	I0906 12:43:34.741043   14256 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:43:34.741414   14256 main.go:141] libmachine: Using API Version  1
	I0906 12:43:34.741429   14256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:43:34.741644   14256 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:43:34.742018   14256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:43:34.742042   14256 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:43:34.750597   14256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58157
	I0906 12:43:34.750934   14256 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:43:34.751273   14256 main.go:141] libmachine: Using API Version  1
	I0906 12:43:34.751283   14256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:43:34.751498   14256 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:43:34.751641   14256 main.go:141] libmachine: (docker-flags-753000) Calling .GetState
	I0906 12:43:34.751736   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:34.751804   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:34.752764   14256 main.go:141] libmachine: (docker-flags-753000) Calling .DriverName
	I0906 12:43:34.773910   14256 out.go:177] * Deleting "docker-flags-753000" in hyperkit ...
	I0906 12:43:34.815977   14256 main.go:141] libmachine: (docker-flags-753000) Calling .Remove
	I0906 12:43:34.816117   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:34.816126   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:34.816203   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:34.817148   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:34.817207   14256 main.go:141] libmachine: (docker-flags-753000) DBG | waiting for graceful shutdown
	I0906 12:43:35.818298   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:35.818363   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:35.819250   14256 main.go:141] libmachine: (docker-flags-753000) DBG | waiting for graceful shutdown
	I0906 12:43:36.819817   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:36.819880   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:36.821660   14256 main.go:141] libmachine: (docker-flags-753000) DBG | waiting for graceful shutdown
	I0906 12:43:37.822491   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:37.822539   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:37.823371   14256 main.go:141] libmachine: (docker-flags-753000) DBG | waiting for graceful shutdown
	I0906 12:43:38.825505   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:38.825589   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:38.826316   14256 main.go:141] libmachine: (docker-flags-753000) DBG | waiting for graceful shutdown
	I0906 12:43:39.827533   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:39.827560   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14277
	I0906 12:43:39.828498   14256 main.go:141] libmachine: (docker-flags-753000) DBG | sending sigkill
	I0906 12:43:39.828508   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0906 12:43:39.855873   14256 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:50:c4:be:3b:66
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:50:c4:be:3b:66
	I0906 12:43:39.855916   14256 start.go:729] Will try again in 5 seconds ...
	I0906 12:43:39.864764   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:43:39 WARN : hyperkit: failed to read stdout: EOF
	I0906 12:43:39.864789   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:43:39 WARN : hyperkit: failed to read stderr: EOF
	I0906 12:43:44.857480   14256 start.go:360] acquireMachinesLock for docker-flags-753000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:44:37.577423   14256 start.go:364] duration metric: took 52.720185564s to acquireMachinesLock for "docker-flags-753000"
	I0906 12:44:37.577454   14256 start.go:93] Provisioning new machine with config: &{Name:docker-flags-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:44:37.577507   14256 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:44:37.619689   14256 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:44:37.619753   14256 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:44:37.619778   14256 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:44:37.628762   14256 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58161
	I0906 12:44:37.629108   14256 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:44:37.629511   14256 main.go:141] libmachine: Using API Version  1
	I0906 12:44:37.629532   14256 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:44:37.629781   14256 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:44:37.629905   14256 main.go:141] libmachine: (docker-flags-753000) Calling .GetMachineName
	I0906 12:44:37.630011   14256 main.go:141] libmachine: (docker-flags-753000) Calling .DriverName
	I0906 12:44:37.630122   14256 start.go:159] libmachine.API.Create for "docker-flags-753000" (driver="hyperkit")
	I0906 12:44:37.630152   14256 client.go:168] LocalClient.Create starting
	I0906 12:44:37.630180   14256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:44:37.630231   14256 main.go:141] libmachine: Decoding PEM data...
	I0906 12:44:37.630242   14256 main.go:141] libmachine: Parsing certificate...
	I0906 12:44:37.630286   14256 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:44:37.630327   14256 main.go:141] libmachine: Decoding PEM data...
	I0906 12:44:37.630339   14256 main.go:141] libmachine: Parsing certificate...
	I0906 12:44:37.630352   14256 main.go:141] libmachine: Running pre-create checks...
	I0906 12:44:37.630358   14256 main.go:141] libmachine: (docker-flags-753000) Calling .PreCreateCheck
	I0906 12:44:37.630435   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:37.630465   14256 main.go:141] libmachine: (docker-flags-753000) Calling .GetConfigRaw
	I0906 12:44:37.661524   14256 main.go:141] libmachine: Creating machine...
	I0906 12:44:37.661533   14256 main.go:141] libmachine: (docker-flags-753000) Calling .Create
	I0906 12:44:37.661624   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:37.661766   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:44:37.661613   14311 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:44:37.661815   14256 main.go:141] libmachine: (docker-flags-753000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:44:38.007072   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:44:38.007010   14311 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/id_rsa...
	I0906 12:44:38.220954   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:44:38.220894   14311 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/docker-flags-753000.rawdisk...
	I0906 12:44:38.220972   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Writing magic tar header
	I0906 12:44:38.221000   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Writing SSH key tar header
	I0906 12:44:38.221463   14256 main.go:141] libmachine: (docker-flags-753000) DBG | I0906 12:44:38.221372   14311 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000 ...
	I0906 12:44:38.613104   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:38.613142   14256 main.go:141] libmachine: (docker-flags-753000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/hyperkit.pid
	I0906 12:44:38.613167   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Using UUID c67861ac-ef44-4d97-87e2-d7aed03b1d0d
	I0906 12:44:38.649087   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Generated MAC 96:26:9a:2f:e5:cc
	I0906 12:44:38.649105   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-753000
	I0906 12:44:38.649166   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c67861ac-ef44-4d97-87e2-d7aed03b1d0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000bc1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:44:38.649204   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"c67861ac-ef44-4d97-87e2-d7aed03b1d0d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000bc1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:44:38.649238   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "c67861ac-ef44-4d97-87e2-d7aed03b1d0d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/docker-flags-753000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-753000"}
	I0906 12:44:38.649271   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U c67861ac-ef44-4d97-87e2-d7aed03b1d0d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/docker-flags-753000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-753000"
	I0906 12:44:38.649310   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:44:38.652353   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 DEBUG: hyperkit: Pid is 14326
	I0906 12:44:38.652836   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 0
	I0906 12:44:38.652850   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:38.652894   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:38.653898   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:38.654007   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:38.654029   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:38.654059   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:38.654091   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:38.654113   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:38.654150   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:38.654166   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:38.654181   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:38.654195   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:38.654209   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:38.654222   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:38.654247   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:38.654262   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:38.654279   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:38.654291   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:38.654322   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:38.654331   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:38.654363   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:38.654377   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:38.654407   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:38.654421   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:38.654434   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:38.654449   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:38.654474   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:38.654491   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:38.654503   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:38.654511   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:38.654518   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:38.654525   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:38.654549   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:38.654563   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:38.654574   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:38.654583   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:38.654589   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:38.654596   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:38.654619   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:38.654630   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:38.654649   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:38.660333   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:44:38.668718   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/docker-flags-753000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:44:38.669663   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:44:38.669693   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:44:38.669714   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:44:38.669730   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:44:39.056709   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:44:39.056724   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:44:39.171463   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:44:39.171479   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:44:39.171490   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:44:39.171503   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:44:39.172408   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:44:39.172421   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:39 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:44:40.656191   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 1
	I0906 12:44:40.656211   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:40.656307   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:40.657173   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:40.657252   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:40.657262   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:40.657274   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:40.657284   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:40.657322   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:40.657335   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:40.657346   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:40.657360   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:40.657371   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:40.657384   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:40.657397   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:40.657408   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:40.657418   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:40.657428   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:40.657439   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:40.657452   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:40.657461   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:40.657469   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:40.657487   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:40.657506   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:40.657516   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:40.657526   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:40.657535   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:40.657543   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:40.657552   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:40.657571   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:40.657580   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:40.657592   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:40.657603   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:40.657613   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:40.657620   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:40.657629   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:40.657639   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:40.657650   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:40.657660   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:40.657668   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:40.657680   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:40.657696   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:42.657561   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 2
	I0906 12:44:42.657579   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:42.657644   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:42.658520   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:42.658591   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:42.658603   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:42.658611   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:42.658621   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:42.658632   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:42.658643   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:42.658649   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:42.658655   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:42.658662   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:42.658668   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:42.658673   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:42.658680   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:42.658685   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:42.658691   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:42.658701   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:42.658707   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:42.658716   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:42.658725   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:42.658732   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:42.658738   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:42.658760   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:42.658776   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:42.658792   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:42.658808   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:42.658817   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:42.658825   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:42.658832   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:42.658840   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:42.658869   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:42.658898   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:42.658915   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:42.658928   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:42.658943   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:42.658958   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:42.658966   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:42.658974   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:42.658981   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:42.658988   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:44.575668   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:44 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:44:44.575834   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:44 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:44:44.575847   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:44 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:44:44.595700   14256 main.go:141] libmachine: (docker-flags-753000) DBG | 2024/09/06 12:44:44 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:44:44.659570   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 3
	I0906 12:44:44.659590   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:44.659700   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:44.660762   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:44.660879   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:44.660900   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:44.660911   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:44.660920   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:44.660944   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:44.660959   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:44.660974   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:44.660986   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:44.660997   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:44.661005   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:44.661015   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:44.661026   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:44.661035   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:44.661056   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:44.661067   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:44.661078   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:44.661088   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:44.661097   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:44.661107   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:44.661117   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:44.661127   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:44.661145   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:44.661164   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:44.661184   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:44.661196   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:44.661216   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:44.661235   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:44.661246   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:44.661254   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:44.661265   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:44.661276   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:44.661291   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:44.661303   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:44.661320   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:44.661340   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:44.661350   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:44.661361   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:44.661375   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:46.661851   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 4
	I0906 12:44:46.661868   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:46.661938   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:46.662736   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:46.662827   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:46.662836   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:46.662846   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:46.662858   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:46.662867   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:46.662873   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:46.662883   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:46.662896   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:46.662904   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:46.662910   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:46.662916   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:46.662922   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:46.662930   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:46.662936   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:46.662943   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:46.662949   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:46.662955   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:46.662977   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:46.662984   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:46.662999   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:46.663009   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:46.663017   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:46.663025   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:46.663033   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:46.663041   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:46.663048   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:46.663057   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:46.663066   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:46.663075   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:46.663082   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:46.663090   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:46.663099   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:46.663106   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:46.663114   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:46.663123   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:46.663137   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:46.663146   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:46.663155   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:48.665110   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 5
	I0906 12:44:48.665125   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:48.665176   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:48.666006   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:48.666079   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:48.666089   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:48.666098   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:48.666104   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:48.666110   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:48.666116   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:48.666122   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:48.666128   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:48.666135   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:48.666142   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:48.666148   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:48.666154   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:48.666162   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:48.666173   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:48.666181   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:48.666204   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:48.666222   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:48.666234   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:48.666242   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:48.666250   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:48.666263   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:48.666275   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:48.666288   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:48.666298   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:48.666306   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:48.666314   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:48.666325   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:48.666334   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:48.666343   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:48.666353   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:48.666367   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:48.666381   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:48.666395   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:48.666408   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:48.666416   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:48.666422   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:48.666436   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:48.666444   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:50.668302   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 6
	I0906 12:44:50.668325   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:50.668359   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:50.669135   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:50.669201   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:50.669210   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:50.669230   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:50.669240   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:50.669256   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:50.669262   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:50.669270   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:50.669277   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:50.669314   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:50.669324   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:50.669334   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:50.669344   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:50.669352   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:50.669360   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:50.669367   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:50.669373   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:50.669385   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:50.669398   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:50.669407   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:50.669423   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:50.669437   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:50.669449   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:50.669457   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:50.669466   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:50.669481   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:50.669490   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:50.669500   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:50.669508   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:50.669515   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:50.669523   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:50.669535   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:50.669544   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:50.669553   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:50.669561   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:50.669570   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:50.669579   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:50.669589   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:50.669595   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:52.669483   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 7
	I0906 12:44:52.669499   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:52.669589   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:52.670376   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:52.670440   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:52.670454   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:52.670467   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:52.670473   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:52.670488   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:52.670498   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:52.670511   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:52.670523   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:52.670531   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:52.670537   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:52.670544   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:52.670551   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:52.670558   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:52.670572   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:52.670582   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:52.670590   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:52.670596   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:52.670602   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:52.670610   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:52.670617   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:52.670625   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:52.670638   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:52.670649   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:52.670658   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:52.670666   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:52.670673   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:52.670680   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:52.670687   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:52.670695   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:52.670713   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:52.670736   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:52.670743   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:52.670750   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:52.670757   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:52.670763   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:52.670771   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:52.670778   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:52.670787   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:54.672682   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 8
	I0906 12:44:54.672707   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:54.672723   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:54.673548   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:54.673598   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:54.673608   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:54.673616   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:54.673623   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:54.673629   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:54.673645   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:54.673652   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:54.673679   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:54.673696   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:54.673712   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:54.673726   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:54.673735   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:54.673742   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:54.673748   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:54.673756   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:54.673770   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:54.673786   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:54.673794   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:54.673803   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:54.673810   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:54.673818   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:54.673825   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:54.673832   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:54.673839   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:54.673846   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:54.673853   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:54.673861   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:54.673868   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:54.673875   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:54.673882   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:54.673892   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:54.673899   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:54.673907   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:54.673913   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:54.673919   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:54.673926   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:54.673934   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:54.673943   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:56.674850   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 9
	I0906 12:44:56.674866   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:56.674918   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:56.675714   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:56.675768   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:56.675777   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:56.675784   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:56.675790   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:56.675797   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:56.675809   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:56.675825   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:56.675854   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:56.675862   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:56.675870   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:56.675878   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:56.675886   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:56.675893   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:56.675901   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:56.675908   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:56.675929   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:56.675940   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:56.675947   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:56.675956   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:56.675963   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:56.675971   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:56.675982   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:56.675990   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:56.676003   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:56.676013   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:56.676020   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:56.676028   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:56.676038   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:56.676046   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:56.676059   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:56.676067   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:56.676074   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:56.676082   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:56.676089   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:56.676097   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:56.676111   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:56.676130   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:56.676147   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:58.677991   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 10
	I0906 12:44:58.678007   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:58.678048   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:44:58.678812   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:44:58.678872   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:58.678884   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:58.678892   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:58.678898   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:58.678904   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:58.678909   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:58.678915   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:58.678922   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:58.678929   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:58.678936   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:58.678953   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:58.678967   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:58.678977   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:58.678983   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:58.679003   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:58.679016   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:58.679027   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:58.679036   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:58.679044   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:58.679052   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:58.679062   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:58.679084   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:58.679100   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:58.679119   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:58.679128   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:58.679134   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:58.679151   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:58.679165   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:58.679173   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:58.679181   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:58.679189   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:58.679196   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:58.679202   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:58.679209   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:58.679217   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:58.679236   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:58.679251   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:58.679261   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:00.679056   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 11
	I0906 12:45:00.679071   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:00.679134   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:00.679933   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:00.679985   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:00.679997   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:00.680007   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:00.680017   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:00.680024   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:00.680047   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:00.680056   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:00.680069   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:00.680084   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:00.680092   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:00.680097   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:00.680119   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:00.680130   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:00.680143   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:00.680153   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:00.680160   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:00.680168   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:00.680175   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:00.680182   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:00.680199   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:00.680211   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:00.680225   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:00.680239   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:00.680247   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:00.680260   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:00.680276   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:00.680285   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:00.680292   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:00.680300   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:00.680308   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:00.680315   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:00.680322   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:00.680331   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:00.680339   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:00.680345   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:00.680357   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:00.680370   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:00.680387   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:02.681571   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 12
	I0906 12:45:02.681592   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:02.681668   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:02.682450   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:02.682502   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:02.682510   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:02.682519   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:02.682526   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:02.682535   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:02.682546   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:02.682553   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:02.682559   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:02.682565   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:02.682571   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:02.682580   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:02.682588   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:02.682597   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:02.682614   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:02.682627   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:02.682635   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:02.682641   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:02.682648   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:02.682663   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:02.682680   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:02.682691   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:02.682702   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:02.682717   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:02.682726   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:02.682734   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:02.682745   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:02.682752   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:02.682759   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:02.682767   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:02.682774   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:02.682780   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:02.682794   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:02.682806   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:02.682814   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:02.682822   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:02.682832   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:02.682840   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:02.682848   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:04.684575   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 13
	I0906 12:45:04.684590   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:04.684630   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:04.685434   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:04.685489   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:04.685500   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:04.685549   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:04.685565   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:04.685575   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:04.685582   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:04.685596   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:04.685612   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:04.685624   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:04.685633   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:04.685640   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:04.685658   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:04.685668   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:04.685674   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:04.685681   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:04.685689   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:04.685696   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:04.685704   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:04.685713   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:04.685719   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:04.685726   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:04.685734   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:04.685741   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:04.685755   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:04.685762   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:04.685768   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:04.685775   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:04.685781   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:04.685789   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:04.685796   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:04.685803   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:04.685819   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:04.685832   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:04.685840   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:04.685848   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:04.685855   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:04.685863   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:04.685873   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:06.687737   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 14
	I0906 12:45:06.687754   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:06.687812   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:06.688599   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:06.688658   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:06.688669   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:06.688686   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:06.688693   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:06.688700   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:06.688709   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:06.688727   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:06.688737   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:06.688745   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:06.688755   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:06.688763   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:06.688770   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:06.688783   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:06.688792   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:06.688801   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:06.688809   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:06.688818   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:06.688824   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:06.688831   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:06.688839   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:06.688847   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:06.688853   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:06.688860   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:06.688868   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:06.688875   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:06.688883   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:06.688895   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:06.688906   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:06.688914   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:06.688921   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:06.688936   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:06.688949   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:06.688969   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:06.688977   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:06.688989   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:06.688997   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:06.689004   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:06.689016   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:08.689209   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 15
	I0906 12:45:08.689224   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:08.689317   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:08.690088   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:08.690143   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:08.690153   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:08.690163   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:08.690170   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:08.690198   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:08.690208   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:08.690218   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:08.690236   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:08.690243   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:08.690257   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:08.690269   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:08.690286   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:08.690305   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:08.690314   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:08.690323   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:08.690330   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:08.690337   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:08.690350   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:08.690360   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:08.690372   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:08.690380   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:08.690388   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:08.690395   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:08.690402   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:08.690409   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:08.690428   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:08.690444   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:08.690455   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:08.690464   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:08.690473   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:08.690488   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:08.690496   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:08.690505   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:08.690512   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:08.690519   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:08.690527   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:08.690543   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:08.690555   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:10.691827   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 16
	I0906 12:45:10.691839   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:10.691899   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:10.692668   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:10.692738   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:10.692747   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:10.692767   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:10.692773   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:10.692788   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:10.692803   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:10.692814   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:10.692822   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:10.692828   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:10.692837   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:10.692846   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:10.692853   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:10.692861   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:10.692867   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:10.692875   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:10.692882   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:10.692889   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:10.692897   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:10.692904   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:10.692912   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:10.692967   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:10.692995   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:10.693010   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:10.693024   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:10.693036   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:10.693054   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:10.693064   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:10.693069   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:10.693076   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:10.693081   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:10.693099   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:10.693114   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:10.693130   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:10.693142   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:10.693154   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:10.693164   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:10.693171   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:10.693185   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:12.694649   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 17
	I0906 12:45:12.694664   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:12.694714   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:12.695495   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:12.695564   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:12.695576   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:12.695585   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:12.695591   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:12.695598   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:12.695625   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:12.695639   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:12.695650   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:12.695658   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:12.695666   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:12.695673   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:12.695680   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:12.695690   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:12.695707   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:12.695716   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:12.695730   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:12.695749   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:12.695766   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:12.695780   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:12.695790   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:12.695799   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:12.695806   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:12.695815   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:12.695826   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:12.695835   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:12.695844   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:12.695860   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:12.695868   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:12.695876   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:12.695883   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:12.695889   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:12.695902   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:12.695915   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:12.695929   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:12.695943   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:12.695951   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:12.695959   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:12.695974   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:14.697225   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 18
	I0906 12:45:14.697244   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:14.697304   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:14.698082   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:14.698151   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:14.698165   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:14.698172   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:14.698182   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:14.698190   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:14.698201   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:14.698208   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:14.698216   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:14.698222   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:14.698228   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:14.698233   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:14.698242   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:14.698249   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:14.698257   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:14.698264   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:14.698294   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:14.698307   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:14.698326   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:14.698335   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:14.698343   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:14.698353   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:14.698361   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:14.698368   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:14.698374   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:14.698380   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:14.698390   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:14.698401   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:14.698409   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:14.698419   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:14.698439   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:14.698452   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:14.698460   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:14.698468   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:14.698475   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:14.698483   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:14.698490   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:14.698497   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:14.698515   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:16.700179   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 19
	I0906 12:45:16.700195   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:16.700207   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:16.701003   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:16.701066   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:16.701079   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:16.701090   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:16.701098   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:16.701116   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:16.701127   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:16.701143   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:16.701157   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:16.701168   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:16.701176   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:16.701185   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:16.701201   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:16.701213   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:16.701221   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:16.701230   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:16.701237   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:16.701246   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:16.701254   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:16.701262   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:16.701269   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:16.701277   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:16.701284   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:16.701294   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:16.701304   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:16.701312   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:16.701321   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:16.701351   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:16.701363   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:16.701372   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:16.701380   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:16.701388   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:16.701395   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:16.701402   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:16.701408   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:16.701414   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:16.701421   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:16.701444   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:16.701453   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:18.702024   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 20
	I0906 12:45:18.702044   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:18.702111   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:18.702873   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:18.702950   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:18.702962   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:18.702978   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:18.702989   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:18.703002   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:18.703018   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:18.703026   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:18.703042   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:18.703054   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:18.703063   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:18.703072   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:18.703082   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:18.703088   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:18.703095   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:18.703103   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:18.703109   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:18.703115   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:18.703121   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:18.703128   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:18.703136   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:18.703143   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:18.703151   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:18.703158   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:18.703166   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:18.703173   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:18.703181   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:18.703188   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:18.703195   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:18.703202   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:18.703209   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:18.703215   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:18.703222   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:18.703236   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:18.703248   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:18.703257   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:18.703263   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:18.703283   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:18.703297   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:20.703536   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 21
	I0906 12:45:20.703550   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:20.703609   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:20.704394   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:20.704452   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:20.704464   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:20.704472   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:20.704479   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:20.704487   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:20.704494   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:20.704518   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:20.704528   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:20.704536   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:20.704554   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:20.704565   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:20.704573   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:20.704581   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:20.704588   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:20.704596   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:20.704603   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:20.704609   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:20.704621   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:20.704639   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:20.704657   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:20.704668   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:20.704676   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:20.704684   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:20.704698   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:20.704706   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:20.704716   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:20.704725   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:20.704742   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:20.704754   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:20.704764   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:20.704774   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:20.704785   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:20.704794   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:20.704809   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:20.704822   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:20.704836   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:20.704845   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:20.704855   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:22.705238   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 22
	I0906 12:45:22.705253   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:22.705340   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:22.706129   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:22.706166   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:22.706182   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:22.706205   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:22.706215   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:22.706223   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:22.706230   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:22.706238   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:22.706247   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:22.706269   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:22.706284   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:22.706295   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:22.706305   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:22.706313   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:22.706320   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:22.706329   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:22.706335   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:22.706343   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:22.706363   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:22.706374   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:22.706386   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:22.706395   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:22.706402   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:22.706410   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:22.706419   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:22.706428   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:22.706443   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:22.706455   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:22.706463   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:22.706470   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:22.706477   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:22.706484   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:22.706491   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:22.706499   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:22.706506   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:22.706511   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:22.706517   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:22.706524   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:22.706538   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:24.706922   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 23
	I0906 12:45:24.706936   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:24.707043   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:24.707784   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:24.707849   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:24.707861   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:24.707877   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:24.707884   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:24.707892   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:24.707898   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:24.707905   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:24.707911   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:24.707918   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:24.707933   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:24.707940   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:24.707946   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:24.707953   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:24.707959   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:24.707974   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:24.707986   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:24.707994   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:24.708001   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:24.708021   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:24.708037   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:24.708052   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:24.708064   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:24.708072   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:24.708081   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:24.708088   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:24.708097   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:24.708104   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:24.708111   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:24.708118   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:24.708130   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:24.708149   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:24.708161   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:24.708171   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:24.708182   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:24.708190   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:24.708196   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:24.708203   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:24.708215   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:26.710054   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 24
	I0906 12:45:26.710075   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:26.710142   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:26.710906   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:26.710978   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:26.710989   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:26.710996   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:26.711013   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:26.711027   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:26.711035   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:26.711042   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:26.711050   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:26.711057   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:26.711064   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:26.711071   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:26.711084   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:26.711097   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:26.711107   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:26.711114   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:26.711122   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:26.711130   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:26.711138   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:26.711144   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:26.711150   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:26.711159   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:26.711167   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:26.711177   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:26.711186   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:26.711197   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:26.711207   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:26.711216   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:26.711224   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:26.711232   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:26.711238   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:26.711253   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:26.711265   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:26.711281   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:26.711293   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:26.711301   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:26.711310   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:26.711317   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:26.711326   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:28.713222   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 25
	I0906 12:45:28.713237   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:28.713303   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:28.714070   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:28.714144   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:28.714155   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:28.714163   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:28.714169   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:28.714175   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:28.714180   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:28.714186   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:28.714192   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:28.714198   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:28.714206   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:28.714224   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:28.714238   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:28.714247   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:28.714256   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:28.714264   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:28.714272   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:28.714281   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:28.714289   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:28.714296   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:28.714304   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:28.714310   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:28.714318   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:28.714325   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:28.714333   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:28.714339   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:28.714346   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:28.714353   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:28.714360   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:28.714368   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:28.714374   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:28.714388   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:28.714396   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:28.714403   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:28.714420   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:28.714431   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:28.714445   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:28.714453   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:28.714462   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:30.716330   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 26
	I0906 12:45:30.716343   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:30.716414   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:30.717192   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:30.717260   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:30.717271   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:30.717282   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:30.717293   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:30.717302   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:30.717311   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:30.717320   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:30.717329   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:30.717343   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:30.717354   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:30.717385   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:30.717399   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:30.717407   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:30.717415   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:30.717422   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:30.717437   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:30.717449   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:30.717465   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:30.717475   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:30.717481   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:30.717492   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:30.717504   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:30.717512   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:30.717521   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:30.717529   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:30.717536   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:30.717547   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:30.717554   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:30.717562   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:30.717569   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:30.717586   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:30.717598   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:30.717607   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:30.717614   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:30.717622   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:30.717629   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:30.717635   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:30.717648   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:32.717532   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 27
	I0906 12:45:32.717545   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:32.717622   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:32.718390   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:32.718494   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:32.718508   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:32.718527   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:32.718537   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:32.718548   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:32.718556   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:32.718562   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:32.718571   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:32.718587   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:32.718600   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:32.718615   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:32.718625   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:32.718635   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:32.718642   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:32.718656   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:32.718663   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:32.718674   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:32.718681   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:32.718687   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:32.718694   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:32.718699   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:32.718711   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:32.718723   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:32.718731   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:32.718737   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:32.718744   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:32.718754   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:32.718762   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:32.718771   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:32.718779   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:32.718786   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:32.718793   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:32.718801   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:32.718806   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:32.718813   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:32.718819   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:32.718826   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:32.718843   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:34.719487   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 28
	I0906 12:45:34.719500   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:34.719576   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:34.720337   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:34.720388   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:34.720397   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:34.720409   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:34.720419   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:34.720430   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:34.720439   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:34.720450   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:34.720459   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:34.720476   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:34.720491   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:34.720498   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:34.720507   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:34.720517   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:34.720525   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:34.720532   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:34.720539   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:34.720555   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:34.720567   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:34.720577   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:34.720584   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:34.720593   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:34.720602   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:34.720610   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:34.720618   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:34.720633   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:34.720646   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:34.720656   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:34.720662   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:34.720669   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:34.720677   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:34.720684   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:34.720692   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:34.720711   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:34.720723   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:34.720733   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:34.720744   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:34.720751   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:34.720759   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:36.722626   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Attempt 29
	I0906 12:45:36.723078   14256 main.go:141] libmachine: (docker-flags-753000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:45:36.723261   14256 main.go:141] libmachine: (docker-flags-753000) DBG | hyperkit pid from json: 14326
	I0906 12:45:36.723504   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Searching for 96:26:9a:2f:e5:cc in /var/db/dhcpd_leases ...
	I0906 12:45:36.723579   14256 main.go:141] libmachine: (docker-flags-753000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:45:36.723592   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:45:36.723619   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:45:36.723634   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:45:36.723686   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:45:36.723703   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:45:36.723717   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:45:36.723726   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:45:36.723737   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:45:36.723749   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:45:36.723759   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:45:36.723768   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:45:36.723784   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:45:36.723793   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:45:36.723804   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:45:36.723813   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:45:36.723821   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:45:36.723836   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:45:36.723847   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:45:36.723856   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:45:36.723866   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:45:36.723876   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:45:36.723889   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:45:36.723897   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:45:36.723907   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:45:36.723919   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:45:36.723926   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:45:36.723934   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:45:36.723948   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:45:36.723959   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:45:36.723970   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:45:36.723982   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:45:36.723991   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:45:36.724017   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:45:36.724037   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:45:36.724048   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:45:36.724060   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:45:36.724069   14256 main.go:141] libmachine: (docker-flags-753000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:45:38.724157   14256 client.go:171] duration metric: took 1m1.094331178s to LocalClient.Create
	I0906 12:45:40.726239   14256 start.go:128] duration metric: took 1m3.149069349s to createHost
	I0906 12:45:40.726266   14256 start.go:83] releasing machines lock for "docker-flags-753000", held for 1m3.149175562s
	W0906 12:45:40.726326   14256 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-753000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 96:26:9a:2f:e5:cc
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-753000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 96:26:9a:2f:e5:cc
	I0906 12:45:40.789466   14256 out.go:201] 
	W0906 12:45:40.810286   14256 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 96:26:9a:2f:e5:cc
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 96:26:9a:2f:e5:cc
	W0906 12:45:40.810299   14256 out.go:270] * 
	* 
	W0906 12:45:40.810969   14256 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:45:40.873248   14256 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-753000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-753000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-753000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (178.201395ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-753000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-753000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
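The two failed expectations above amount to a substring check against the `systemctl show docker --property=Environment` output, which in this run came back empty (`"\n\n"`) because the VM never got an IP. A minimal sketch of that check — `hasDockerEnv` is an illustrative helper, not minikube's actual test code, and the `Environment=...` line below is an assumed healthy-run shape:

```go
package main

import (
	"fmt"
	"strings"
)

// hasDockerEnv reports whether the output of
// `systemctl show docker --property=Environment` contains a key/value pair.
// Illustrative helper; the real assertion lives in minikube's docker_test.go.
func hasDockerEnv(out, kv string) bool {
	return strings.Contains(out, kv)
}

func main() {
	// Assumed shape of a healthy run's output:
	healthy := "Environment=FOO=BAR BAZ=BAT"
	fmt.Println(hasDockerEnv(healthy, "FOO=BAR")) // true
	fmt.Println(hasDockerEnv(healthy, "BAZ=BAT")) // true
	// What this run actually returned (stdout was empty):
	fmt.Println(hasDockerEnv("\n\n", "FOO=BAR")) // false
}
```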
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-753000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-753000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (168.37977ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-753000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-753000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-753000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-09-06 12:45:41.424176 -0700 PDT m=+4618.532987500
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-753000 -n docker-flags-753000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-753000 -n docker-flags-753000: exit status 7 (82.913823ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 12:45:41.504662   14354 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 12:45:41.504686   14354 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-753000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-753000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-753000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-753000: (5.247335815s)
--- FAIL: TestDockerFlags (251.98s)
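The root cause of this failure is visible in the attempts above: the driver repeatedly scanned the 37 entries in /var/db/dhcpd_leases and never found the new VM's MAC 96:26:9a:2f:e5:cc. A sketch of that lookup, assuming the macOS bootpd lease file format (brace-delimited blocks of key=value lines, with hw_address carrying a leading hardware-type prefix such as `1,`); `findIP` is an illustrative helper, not minikube's actual hyperkit-driver parser:

```go
package main

import (
	"fmt"
	"strings"
)

// findIP scans dhcpd_leases-style content for the entry whose hw_address
// matches the given MAC, returning that entry's ip_address.
// The lease layout assumed here is inferred from the log output above.
func findIP(leases, mac string) (string, bool) {
	var ip string
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is prefixed with the hardware type, e.g. "1,".
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if hw == mac {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	sample := "{\n\tname=minikube\n\tip_address=192.169.0.38\n\thw_address=1,f6:98:f2:21:d2:4c\n\tlease=0x66dcab05\n}"
	if ip, ok := findIP(sample, "f6:98:f2:21:d2:4c"); ok {
		fmt.Println(ip) // 192.169.0.38
	}
	// The failing MAC from this run is absent, as it was in all 29 attempts:
	_, ok := findIP(sample, "96:26:9a:2f:e5:cc")
	fmt.Println(ok) // false
}
```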

                                                
                                    
TestForceSystemdFlag (251.91s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-489000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-489000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.207387167s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-489000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-489000" primary control-plane node in "force-systemd-flag-489000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-489000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:40:31.582448   14223 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:40:31.582728   14223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:40:31.582735   14223 out.go:358] Setting ErrFile to fd 2...
	I0906 12:40:31.582738   14223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:40:31.582898   14223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:40:31.584437   14223 out.go:352] Setting JSON to false
	I0906 12:40:31.607976   14223 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13202,"bootTime":1725638429,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:40:31.608087   14223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:40:31.629239   14223 out.go:177] * [force-systemd-flag-489000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:40:31.672379   14223 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:40:31.672391   14223 notify.go:220] Checking for updates...
	I0906 12:40:31.714221   14223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:40:31.735228   14223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:40:31.756036   14223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:40:31.777263   14223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:40:31.798304   14223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:40:31.819509   14223 config.go:182] Loaded profile config "force-systemd-env-823000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:40:31.819632   14223 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:40:31.848272   14223 out.go:177] * Using the hyperkit driver based on user configuration
	I0906 12:40:31.890004   14223 start.go:297] selected driver: hyperkit
	I0906 12:40:31.890017   14223 start.go:901] validating driver "hyperkit" against <nil>
	I0906 12:40:31.890027   14223 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:40:31.893013   14223 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:40:31.893150   14223 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:40:31.901613   14223 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:40:31.905510   14223 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:40:31.905532   14223 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:40:31.905568   14223 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:40:31.905771   14223 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:40:31.905828   14223 cni.go:84] Creating CNI manager for ""
	I0906 12:40:31.905843   14223 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:40:31.905852   14223 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:40:31.905913   14223 start.go:340] cluster config:
	{Name:force-systemd-flag-489000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:40:31.906003   14223 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:40:31.927214   14223 out.go:177] * Starting "force-systemd-flag-489000" primary control-plane node in "force-systemd-flag-489000" cluster
	I0906 12:40:31.948267   14223 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:40:31.948304   14223 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:40:31.948325   14223 cache.go:56] Caching tarball of preloaded images
	I0906 12:40:31.948490   14223 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:40:31.948513   14223 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:40:31.948598   14223 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/force-systemd-flag-489000/config.json ...
	I0906 12:40:31.948617   14223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/force-systemd-flag-489000/config.json: {Name:mk189462c6509485aa3c0c6220bcf80f2a30b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:40:31.948942   14223 start.go:360] acquireMachinesLock for force-systemd-flag-489000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:41:28.818463   14223 start.go:364] duration metric: took 56.869951612s to acquireMachinesLock for "force-systemd-flag-489000"
	I0906 12:41:28.818500   14223 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:41:28.818558   14223 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:41:28.860893   14223 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:41:28.861030   14223 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:41:28.861083   14223 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:41:28.869763   14223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58132
	I0906 12:41:28.870122   14223 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:41:28.870555   14223 main.go:141] libmachine: Using API Version  1
	I0906 12:41:28.870569   14223 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:41:28.870792   14223 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:41:28.870922   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .GetMachineName
	I0906 12:41:28.871031   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .DriverName
	I0906 12:41:28.871157   14223 start.go:159] libmachine.API.Create for "force-systemd-flag-489000" (driver="hyperkit")
	I0906 12:41:28.871179   14223 client.go:168] LocalClient.Create starting
	I0906 12:41:28.871216   14223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:41:28.871268   14223 main.go:141] libmachine: Decoding PEM data...
	I0906 12:41:28.871282   14223 main.go:141] libmachine: Parsing certificate...
	I0906 12:41:28.871332   14223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:41:28.871372   14223 main.go:141] libmachine: Decoding PEM data...
	I0906 12:41:28.871384   14223 main.go:141] libmachine: Parsing certificate...
	I0906 12:41:28.871396   14223 main.go:141] libmachine: Running pre-create checks...
	I0906 12:41:28.871404   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .PreCreateCheck
	I0906 12:41:28.871491   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:28.871695   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .GetConfigRaw
	I0906 12:41:28.882060   14223 main.go:141] libmachine: Creating machine...
	I0906 12:41:28.882070   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .Create
	I0906 12:41:28.882150   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:28.882279   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:41:28.882148   14241 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:41:28.882327   14223 main.go:141] libmachine: (force-systemd-flag-489000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:41:29.223611   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:41:29.223552   14241 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/id_rsa...
	I0906 12:41:29.311767   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:41:29.311698   14241 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/force-systemd-flag-489000.rawdisk...
	I0906 12:41:29.311788   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Writing magic tar header
	I0906 12:41:29.311815   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Writing SSH key tar header
	I0906 12:41:29.348797   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:41:29.348739   14241 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000 ...
	I0906 12:41:29.760932   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:29.760947   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/hyperkit.pid
	I0906 12:41:29.760997   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Using UUID 8df21bae-5404-4047-951d-984b2cde34f1
	I0906 12:41:29.786484   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Generated MAC 86:89:c8:0:24:c5
	I0906 12:41:29.786503   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-489000
	I0906 12:41:29.786531   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8df21bae-5404-4047-951d-984b2cde34f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:41:29.786556   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8df21bae-5404-4047-951d-984b2cde34f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:
[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:41:29.786619   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8df21bae-5404-4047-951d-984b2cde34f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/force-systemd-flag-489000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/fo
rce-systemd-flag-489000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-489000"}
	I0906 12:41:29.786650   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8df21bae-5404-4047-951d-984b2cde34f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/force-systemd-flag-489000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/bzimage,/Users/jenkins/minikube-integr
ation/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-489000"
	I0906 12:41:29.786670   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:41:29.789651   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 DEBUG: hyperkit: Pid is 14255
	I0906 12:41:29.790099   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 0
	I0906 12:41:29.790125   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:29.790227   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:29.791142   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:29.791226   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:29.791243   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:29.791259   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:29.791273   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:29.791288   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:29.791298   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:29.791327   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:29.791357   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:29.791384   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:29.791399   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:29.791409   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:29.791416   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:29.791429   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:29.791481   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:29.791516   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:29.791525   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:29.791534   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:29.791545   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:29.791568   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:29.791588   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:29.791598   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:29.791612   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:29.791621   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:29.791631   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:29.791661   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:29.791679   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:29.791703   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:29.791722   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:29.791750   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:29.791768   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:29.791780   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:29.791795   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:29.791808   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:29.791824   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:29.791842   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:29.791856   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:29.791874   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:29.791893   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:29.797593   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:41:29.805782   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:41:29.806683   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:41:29.806704   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:41:29.806721   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:41:29.806734   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:29 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:41:30.189345   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:41:30.189362   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:41:30.303918   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:41:30.303939   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:41:30.303971   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:41:30.303992   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:41:30.304812   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:41:30.304822   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:30 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:41:31.793416   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 1
	I0906 12:41:31.793449   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:31.793536   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:31.794353   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:31.794403   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:31.794416   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:31.794430   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:31.794436   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:31.794443   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:31.794449   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:31.794455   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:31.794461   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:31.794467   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:31.794473   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:31.794485   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:31.794495   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:31.794508   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:31.794520   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:31.794536   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:31.794550   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:31.794561   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:31.794571   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:31.794580   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:31.794590   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:31.794598   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:31.794606   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:31.794617   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:31.794628   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:31.794638   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:31.794646   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:31.794660   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:31.794672   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:31.794681   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:31.794689   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:31.794698   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:31.794707   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:31.794714   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:31.794723   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:31.794731   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:31.794738   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:31.794747   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:31.794755   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:33.795606   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 2
	I0906 12:41:33.795622   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:33.795674   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:33.796505   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:33.796548   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:33.796562   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:33.796590   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:33.796606   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:33.796617   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:33.796623   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:33.796632   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:33.796649   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:33.796658   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:33.796666   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:33.796673   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:33.796681   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:33.796687   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:33.796694   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:33.796702   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:33.796710   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:33.796717   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:33.796725   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:33.796738   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:33.796753   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:33.796761   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:33.796778   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:33.796790   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:33.796837   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:33.796859   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:33.796868   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:33.796876   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:33.796884   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:33.796892   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:33.796899   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:33.796907   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:33.796927   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:33.796941   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:33.796951   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:33.796957   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:33.796965   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:33.796974   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:33.796990   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:35.730520   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:35 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:41:35.730611   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:35 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:41:35.730622   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:35 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:41:35.750560   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:41:35 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:41:35.797670   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 3
	I0906 12:41:35.797709   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:35.797854   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:35.798927   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:35.799118   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:35.799138   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:35.799151   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:35.799165   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:35.799174   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:35.799184   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:35.799192   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:35.799202   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:35.799211   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:35.799222   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:35.799230   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:35.799240   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:35.799252   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:35.799262   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:35.799276   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:35.799285   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:35.799297   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:35.799307   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:35.799315   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:35.799324   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:35.799342   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:35.799353   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:35.799389   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:35.799436   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:35.799445   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:35.799456   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:35.799469   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:35.799479   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:35.799489   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:35.799501   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:35.799522   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:35.799539   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:35.799551   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:35.799567   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:35.799579   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:35.799589   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:35.799609   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:35.799622   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:37.799983   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 4
	I0906 12:41:37.799997   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:37.800062   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:37.800878   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:37.800951   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:37.800962   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:37.800980   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:37.800990   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:37.800999   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:37.801006   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:37.801027   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:37.801036   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:37.801048   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:37.801060   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:37.801067   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:37.801074   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:37.801080   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:37.801086   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:37.801094   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:37.801101   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:37.801110   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:37.801126   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:37.801138   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:37.801146   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:37.801162   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:37.801171   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:37.801179   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:37.801188   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:37.801195   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:37.801202   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:37.801210   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:37.801217   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:37.801225   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:37.801233   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:37.801240   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:37.801247   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:37.801254   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:37.801260   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:37.801268   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:37.801276   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:37.801283   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:37.801292   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:39.802229   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 5
	I0906 12:41:39.802258   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:39.802280   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:39.803050   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:39.803119   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:39.803136   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:39.803149   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:39.803158   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:39.803165   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:39.803171   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:39.803177   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:39.803186   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:39.803201   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:39.803209   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:39.803224   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:39.803234   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:39.803243   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:39.803258   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:39.803265   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:39.803272   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:39.803278   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:39.803287   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:39.803300   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:39.803311   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:39.803328   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:39.803340   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:39.803357   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:39.803366   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:39.803374   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:39.803381   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:39.803389   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:39.803397   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:39.803404   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:39.803421   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:39.803429   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:39.803437   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:39.803447   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:39.803456   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:39.803463   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:39.803470   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:39.803479   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:39.803487   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:41.805337   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 6
	I0906 12:41:41.805350   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:41.805397   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:41.806200   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:41.806234   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:41.806245   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:41.806254   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:41.806263   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:41.806270   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:41.806278   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:41.806285   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:41.806291   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:41.806296   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:41.806307   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:41.806314   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:41.806322   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:41.806331   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:41.806343   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:41.806356   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:41.806376   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:41.806385   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:41.806393   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:41.806402   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:41.806409   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:41.806415   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:41.806422   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:41.806428   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:41.806437   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:41.806443   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:41.806450   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:41.806457   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:41.806465   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:41.806473   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:41.806478   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:41.806492   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:41.806505   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:41.806513   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:41.806521   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:41.806537   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:41.806546   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:41.806553   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:41.806560   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:43.808429   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 7
	I0906 12:41:43.808446   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:43.808495   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:43.809294   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:43.809332   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:43.809342   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:43.809350   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:43.809357   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:43.809372   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:43.809389   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:43.809412   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:43.809432   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:43.809442   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:43.809449   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:43.809456   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:43.809465   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:43.809473   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:43.809479   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:43.809486   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:43.809495   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:43.809502   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:43.809508   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:43.809521   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:43.809532   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:43.809541   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:43.809549   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:43.809567   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:43.809580   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:43.809605   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:43.809618   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:43.809631   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:43.809640   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:43.809656   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:43.809668   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:43.809679   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:43.809688   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:43.809696   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:43.809702   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:43.809712   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:43.809720   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:43.809728   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:43.809736   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:45.811498   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 8
	I0906 12:41:45.811514   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:45.811579   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:45.812353   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:45.812429   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:45.812441   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:45.812454   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:45.812466   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:45.812498   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:45.812508   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:45.812515   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:45.812525   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:45.812533   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:45.812540   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:45.812548   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:45.812558   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:45.812567   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:45.812575   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:45.812588   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:45.812596   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:45.812604   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:45.812611   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:45.812620   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:45.812639   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:45.812649   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:45.812659   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:45.812667   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:45.812675   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:45.812693   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:45.812702   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:45.812710   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:45.812718   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:45.812734   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:45.812746   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:45.812769   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:45.812783   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:45.812791   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:45.812799   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:45.812818   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:45.812826   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:45.812834   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:45.812843   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
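The attempts above show the hyperkit driver polling `/var/db/dhcpd_leases` roughly every two seconds, scanning all 37 parsed entries for the new VM's MAC address (`86:89:c8:0:24:c5`), which never appears, so it retries. A minimal sketch of that scan, using hypothetical names (`DHCPEntry`, `findIPByMAC`) rather than the actual code in `docker-machine-driver-hyperkit`:

```go
package main

import "fmt"

// DHCPEntry mirrors the fields the driver logs for each
// /var/db/dhcpd_leases record: {Name: IPAddress: HWAddress: ...}.
type DHCPEntry struct {
	Name      string
	IPAddress string
	HWAddress string
}

// findIPByMAC scans parsed lease entries for a VM's MAC address and
// returns its IP, mimicking the "Searching for <mac> in
// /var/db/dhcpd_leases" step seen in each attempt above.
func findIPByMAC(entries []DHCPEntry, mac string) (string, bool) {
	for _, e := range entries {
		if e.HWAddress == mac {
			return e.IPAddress, true
		}
	}
	return "", false
}

func main() {
	entries := []DHCPEntry{
		{Name: "minikube", IPAddress: "192.169.0.38", HWAddress: "f6:98:f2:21:d2:4c"},
		{Name: "minikube", IPAddress: "192.169.0.37", HWAddress: "26:b1:20:d4:e0:0"},
	}
	// The new VM's MAC (86:89:c8:0:24:c5) has no lease yet, so the
	// driver logs another "Attempt N" and scans again after a delay.
	if ip, ok := findIPByMAC(entries, "86:89:c8:0:24:c5"); ok {
		fmt.Println("found", ip)
	} else {
		fmt.Println("no lease yet; retrying")
	}
}
```

In the log, each attempt finds the same 37 stale `minikube` leases, so the loop keeps retrying until the driver's timeout is reached and the test fails with exit status 80.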
	I0906 12:41:47.812709   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 9
	I0906 12:41:47.812723   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:47.812789   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:47.813548   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:47.813611   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:47.813621   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:47.813630   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:47.813637   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:47.813653   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:47.813665   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:47.813673   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:47.813680   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:47.813687   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:47.813694   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:47.813701   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:47.813706   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:47.813715   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:47.813724   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:47.813731   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:47.813737   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:47.813747   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:47.813758   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:47.813767   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:47.813773   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:47.813781   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:47.813788   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:47.813795   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:47.813806   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:47.813820   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:47.813841   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:47.813852   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:47.813863   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:47.813872   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:47.813879   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:47.813889   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:47.813905   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:47.813917   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:47.813925   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:47.813933   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:47.813941   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:47.813949   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:47.813968   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:49.815540   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 10
	I0906 12:41:49.815556   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:49.815629   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:49.816399   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:49.816460   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:49.816469   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:49.816487   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:49.816495   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:49.816503   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:49.816511   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:49.816522   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:49.816529   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:49.816536   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:49.816544   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:49.816555   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:49.816561   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:49.816568   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:49.816577   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:49.816593   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:49.816606   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:49.816621   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:49.816630   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:49.816641   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:49.816649   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:49.816657   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:49.816665   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:49.816672   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:49.816686   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:49.816702   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:49.816714   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:49.816727   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:49.816735   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:49.816748   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:49.816756   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:49.816763   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:49.816771   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:49.816795   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:49.816809   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:49.816817   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:49.816824   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:49.816835   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:49.816848   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:51.817152   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 11
	I0906 12:41:51.817165   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:51.817210   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:51.817976   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:51.818046   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:51.818059   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:51.818079   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:51.818103   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:51.818130   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:51.818145   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:51.818157   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:51.818168   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:51.818175   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:51.818183   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:51.818190   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:51.818197   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:51.818207   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:51.818214   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:51.818229   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:51.818239   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:51.818248   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:51.818254   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:51.818262   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:51.818270   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:51.818277   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:51.818284   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:51.818300   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:51.818311   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:51.818317   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:51.818326   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:51.818334   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:51.818344   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:51.818352   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:51.818360   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:51.818372   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:51.818385   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:51.818394   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:51.818408   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:51.818416   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:51.818424   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:51.818439   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:51.818448   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:53.873790   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 12
	I0906 12:41:53.873804   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:53.873822   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:53.874604   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:53.874648   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:53.874659   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:53.874671   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:53.874678   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:53.874697   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:53.874719   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:53.874727   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:53.874746   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:53.874755   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:53.874765   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:53.874771   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:53.874793   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:53.874803   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:53.874812   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:53.874821   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:53.874829   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:53.874836   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:53.874844   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:53.874852   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:53.874857   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:53.874864   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:53.874879   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:53.874887   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:53.874907   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:53.874916   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:53.874923   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:53.874932   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:53.874939   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:53.874947   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:53.874955   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:53.874963   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:53.874976   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:53.874984   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:53.874991   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:53.875001   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:53.875008   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:53.875015   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:53.875032   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:55.876961   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 13
	I0906 12:41:55.876978   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:55.877026   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:55.877775   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:55.877851   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:55.877861   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:55.877870   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:55.877881   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:55.877888   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:55.877895   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:55.877902   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:55.877910   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:55.877918   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:55.877924   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:55.877931   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:55.877937   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:55.877955   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:55.877968   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:55.877979   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:55.877988   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:55.877995   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:55.878002   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:55.878008   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:55.878016   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:55.878028   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:55.878036   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:55.878043   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:55.878050   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:55.878057   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:55.878066   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:55.878074   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:55.878082   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:55.878089   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:55.878097   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:55.878104   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:55.878112   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:55.878119   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:55.878141   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:55.878157   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:55.878165   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:55.878176   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:55.878184   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:57.878939   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 14
	I0906 12:41:57.878954   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:57.879014   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:57.879784   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:57.879828   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:57.879836   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:57.879851   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:57.879863   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:57.879871   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:57.879891   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:57.879897   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:57.879904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:57.879911   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:57.879917   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:57.879926   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:57.879938   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:57.879947   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:57.879966   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:57.879979   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:57.879994   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:57.880003   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:57.880011   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:57.880019   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:57.880034   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:57.880047   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:57.880056   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:57.880062   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:57.880069   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:57.880078   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:57.880090   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:57.880100   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:57.880107   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:57.880114   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:57.880121   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:57.880129   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:57.880140   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:57.880151   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:57.880160   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:57.880167   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:57.880180   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:57.880192   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:57.880203   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:59.882047   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 15
	I0906 12:41:59.882060   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:59.882111   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:41:59.882896   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:41:59.882961   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:59.882972   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:59.882980   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:59.882988   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:59.883000   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:59.883009   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:59.883021   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:59.883029   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:59.883036   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:59.883043   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:59.883055   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:59.883062   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:59.883071   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:59.883077   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:59.883086   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:59.883093   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:59.883099   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:59.883106   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:59.883114   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:59.883129   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:59.883142   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:59.883151   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:59.883160   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:59.883167   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:59.883174   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:59.883189   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:59.883201   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:59.883211   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:59.883220   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:59.883234   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:59.883248   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:59.883262   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:59.883277   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:59.883287   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:59.883293   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:59.883312   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:59.883326   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:59.883336   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:01.883361   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 16
	I0906 12:42:01.883379   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:01.883487   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:01.884238   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:01.884302   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:01.884312   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:01.884321   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:01.884327   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:01.884356   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:01.884369   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:01.884377   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:01.884383   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:01.884392   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:01.884404   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:01.884414   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:01.884445   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:01.884465   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:01.884478   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:01.884492   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:01.884504   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:01.884530   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:01.884543   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:01.884555   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:01.884564   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:01.884587   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:01.884598   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:01.884607   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:01.884617   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:01.884633   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:01.884642   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:01.884649   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:01.884657   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:01.884673   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:01.884685   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:01.884694   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:01.884703   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:01.884711   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:01.884720   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:01.884727   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:01.884736   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:01.884750   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:01.884759   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
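The "Attempt N" loop above shows the hyperkit driver polling macOS's DHCP lease database, looking for the VM's MAC address (`86:89:c8:0:24:c5`) among the leases so it can learn the VM's IP; each attempt re-reads `/var/db/dhcpd_leases` and sleeps before retrying. The following is a minimal illustrative sketch of that polling pattern, not the actual `docker-machine-driver-hyperkit` code, assuming the brace-delimited `key=value` record format that macOS's `bootpd` writes to the lease file (all function names here are hypothetical):

```python
import re
import time

def parse_leases(text):
    """Return one {field: value} dict per {...} record in the lease file."""
    entries = []
    for block in re.findall(r"\{([^}]*)\}", text):
        entry = {}
        for line in block.splitlines():
            line = line.strip()
            if "=" in line:
                key, _, value = line.partition("=")
                entry[key] = value
        if entry:
            entries.append(entry)
    return entries

def find_ip_by_mac(text, mac):
    """Find the lease whose hw_address ends with the given MAC, if any."""
    for entry in parse_leases(text):
        if entry.get("hw_address", "").endswith(mac):
            return entry.get("ip_address")
    return None

def wait_for_ip(path, mac, attempts=60, delay=2.0):
    """Poll the lease file until the MAC appears, like the driver's
    'Attempt N' loop in the log above (hypothetical helper)."""
    for attempt in range(1, attempts + 1):
        with open(path) as f:
            ip = find_ip_by_mac(f.read(), mac)
        if ip:
            return ip
        time.sleep(delay)
    raise TimeoutError(f"no DHCP lease found for {mac}")
```

In the failing run above, the target MAC never appears among the 37 entries, so the loop keeps exhausting attempts, which is consistent with the eventual exit status 80.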
	I0906 12:42:03.886467   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 17
	I0906 12:42:03.886485   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:03.886526   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:03.887297   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:03.887362   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:03.887389   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:03.887399   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:03.887413   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:03.887422   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:03.887429   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:03.887438   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:03.887445   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:03.887451   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:03.887466   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:03.887476   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:03.887485   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:03.887492   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:03.887500   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:03.887507   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:03.887515   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:03.887523   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:03.887536   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:03.887544   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:03.887552   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:03.887560   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:03.887582   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:03.887592   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:03.887601   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:03.887615   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:03.887622   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:03.887641   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:03.887655   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:03.887663   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:03.887672   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:03.887684   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:03.887692   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:03.887707   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:03.887722   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:03.887730   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:03.887739   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:03.887746   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:03.887755   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:05.887735   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 18
	I0906 12:42:05.887750   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:05.887759   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:05.888536   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:05.888597   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:05.888608   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:05.888626   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:05.888637   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:05.888646   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:05.888656   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:05.888664   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:05.888671   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:05.888678   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:05.888689   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:05.888697   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:05.888712   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:05.888720   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:05.888728   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:05.888738   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:05.888746   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:05.888753   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:05.888761   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:05.888768   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:05.888776   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:05.888784   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:05.888807   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:05.888816   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:05.888823   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:05.888831   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:05.888839   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:05.888844   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:05.888851   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:05.888860   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:05.888868   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:05.888877   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:05.888883   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:05.888891   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:05.888899   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:05.888911   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:05.888928   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:05.888941   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:05.888957   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:07.889670   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 19
	I0906 12:42:07.889684   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:07.889751   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:07.890533   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:07.890612   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:07.890624   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:07.890639   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:07.890648   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:07.890658   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:07.890666   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:07.890683   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:07.890693   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:07.890701   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:07.890708   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:07.890715   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:07.890723   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:07.890731   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:07.890739   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:07.890753   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:07.890767   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:07.890785   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:07.890796   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:07.890806   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:07.890813   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:07.890826   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:07.890839   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:07.890848   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:07.890856   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:07.890864   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:07.890872   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:07.890888   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:07.890900   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:07.890909   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:07.890915   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:07.890927   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:07.890949   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:07.890958   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:07.890969   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:07.890977   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:07.890985   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:07.890992   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:07.891000   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:09.891958   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 20
	I0906 12:42:09.891988   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:09.892056   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:09.892826   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:09.892877   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:09.892889   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:09.892898   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:09.892904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:09.892924   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:09.892936   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:09.892948   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:09.892956   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:09.892964   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:09.892979   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:09.892993   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:09.893002   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:09.893008   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:09.893015   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:09.893023   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:09.893030   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:09.893038   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:09.893045   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:09.893052   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:09.893065   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:09.893074   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:09.893081   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:09.893089   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:09.893102   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:09.893110   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:09.893121   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:09.893131   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:09.893143   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:09.893152   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:09.893160   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:09.893168   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:09.893175   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:09.893183   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:09.893197   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:09.893210   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:09.893219   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:09.893227   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:09.893235   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:11.895137   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 21
	I0906 12:42:11.895151   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:11.895187   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:11.896117   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:11.896186   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:11.896195   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:11.896204   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:11.896210   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:11.896217   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:11.896223   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:11.896230   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:11.896239   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:11.896267   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:11.896280   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:11.896288   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:11.896294   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:11.896312   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:11.896324   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:11.896334   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:11.896342   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:11.896349   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:11.896358   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:11.896374   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:11.896387   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:11.896395   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:11.896404   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:11.896411   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:11.896419   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:11.896426   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:11.896442   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:11.896452   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:11.896460   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:11.896475   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:11.896487   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:11.896509   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:11.896523   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:11.896539   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:11.896553   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:11.896569   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:11.896578   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:11.896589   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:11.896599   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:13.897582   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 22
	I0906 12:42:13.897595   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:13.897652   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:13.898444   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:13.898515   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:13.898528   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:13.898546   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:13.898558   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:13.898567   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:13.898574   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:13.898581   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:13.898588   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:13.898605   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:13.898616   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:13.898633   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:13.898642   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:13.898650   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:13.898657   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:13.898672   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:13.898681   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:13.898688   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:13.898697   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:13.898705   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:13.898712   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:13.898719   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:13.898725   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:13.898732   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:13.898739   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:13.898746   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:13.898753   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:13.898760   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:13.898766   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:13.898778   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:13.898793   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:13.898807   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:13.898820   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:13.898832   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:13.898838   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:13.898852   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:13.898877   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:13.898884   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:13.898893   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:15.899329   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 23
	I0906 12:42:15.899345   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:15.899396   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:15.900190   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:15.900233   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:15.900252   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:15.900262   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:15.900269   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:15.900276   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:15.900282   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:15.900290   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:15.900297   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:15.900305   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:15.900310   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:15.900335   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:15.900348   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:15.900365   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:15.900378   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:15.900386   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:15.900392   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:15.900399   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:15.900406   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:15.900415   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:15.900422   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:15.900429   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:15.900435   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:15.900442   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:15.900450   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:15.900457   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:15.900464   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:15.900471   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:15.900476   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:15.900494   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:15.900507   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:15.900515   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:15.900527   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:15.900535   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:15.900543   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:15.900551   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:15.900561   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:15.900572   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:15.900580   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:17.900922   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 24
	I0906 12:42:17.900935   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:17.901022   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:17.901789   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:17.901832   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:17.901852   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:17.901868   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:17.901897   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:17.901921   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:17.901929   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:17.901947   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:17.901953   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:17.901966   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:17.901976   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:17.901987   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:17.901996   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:17.902003   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:17.902011   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:17.902020   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:17.902028   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:17.902036   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:17.902044   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:17.902051   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:17.902059   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:17.902066   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:17.902074   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:17.902081   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:17.902089   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:17.902096   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:17.902103   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:17.902110   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:17.902126   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:17.902147   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:17.902160   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:17.902168   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:17.902177   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:17.902192   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:17.902205   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:17.902214   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:17.902220   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:17.902234   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:17.902249   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:19.902875   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 25
	I0906 12:42:19.902889   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:19.902945   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:19.903748   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:19.903807   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:19.903816   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:19.903825   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:19.903834   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:19.903847   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:19.903860   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:19.903868   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:19.903875   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:19.903881   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:19.903888   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:19.903895   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:19.903902   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:19.903909   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:19.903914   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:19.903926   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:19.903933   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:19.903949   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:19.903957   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:19.903964   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:19.903972   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:19.903984   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:19.903995   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:19.904003   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:19.904011   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:19.904023   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:19.904043   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:19.904082   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:19.904091   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:19.904101   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:19.904114   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:19.904122   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:19.904131   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:19.904138   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:19.904145   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:19.904151   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:19.904160   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:19.904169   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:19.904179   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:21.905942   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 26
	I0906 12:42:21.905955   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:21.906015   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:21.906794   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:21.906846   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:21.906858   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:21.906867   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:21.906875   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:21.906887   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:21.906898   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:21.906906   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:21.906914   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:21.906931   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:21.906944   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:21.906952   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:21.906958   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:21.906974   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:21.906984   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:21.906993   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:21.907005   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:21.907014   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:21.907022   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:21.907037   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:21.907049   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:21.907064   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:21.907074   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:21.907089   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:21.907099   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:21.907109   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:21.907118   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:21.907126   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:21.907132   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:21.907139   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:21.907150   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:21.907157   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:21.907165   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:21.907172   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:21.907180   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:21.907187   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:21.907196   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:21.907203   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:21.907209   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:23.908069   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 27
	I0906 12:42:23.908085   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:23.908161   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:23.908954   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:23.909007   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:23.909018   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:23.909025   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:23.909034   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:23.909044   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:23.909053   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:23.909062   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:23.909089   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:23.909103   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:23.909111   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:23.909120   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:23.909137   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:23.909145   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:23.909153   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:23.909179   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:23.909196   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:23.909207   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:23.909224   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:23.909238   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:23.909246   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:23.909254   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:23.909262   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:23.909278   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:23.909290   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:23.909301   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:23.909314   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:23.909323   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:23.909338   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:23.909347   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:23.909367   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:23.909378   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:23.909393   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:23.909406   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:23.909428   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:23.909438   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:23.909447   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:23.909456   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:23.909471   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:25.911196   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 28
	I0906 12:42:25.911213   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:25.911278   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:25.912051   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:25.912115   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:25.912125   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:25.912134   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:25.912161   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:25.912172   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:25.912184   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:25.912191   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:25.912198   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:25.912205   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:25.912211   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:25.912234   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:25.912256   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:25.912272   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:25.912285   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:25.912293   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:25.912299   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:25.912306   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:25.912315   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:25.912322   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:25.912330   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:25.912341   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:25.912349   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:25.912357   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:25.912364   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:25.912375   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:25.912382   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:25.912389   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:25.912395   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:25.912411   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:25.912430   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:25.912446   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:25.912458   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:25.912467   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:25.912473   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:25.912480   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:25.912488   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:25.912497   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:25.912504   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:27.912477   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 29
	I0906 12:42:27.912499   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:27.912578   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:27.913357   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 86:89:c8:0:24:c5 in /var/db/dhcpd_leases ...
	I0906 12:42:27.913426   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:42:27.913436   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:42:27.913445   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:42:27.913451   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:42:27.913457   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:42:27.913463   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:42:27.913479   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:42:27.913491   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:42:27.913499   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:42:27.913506   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:42:27.913528   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:42:27.913540   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:42:27.913550   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:42:27.913558   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:42:27.913566   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:42:27.913573   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:42:27.913581   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:42:27.913589   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:42:27.913597   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:42:27.913612   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:42:27.913624   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:42:27.913633   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:42:27.913642   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:42:27.913650   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:42:27.913658   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:42:27.913665   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:42:27.913672   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:42:27.913679   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:42:27.913687   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:42:27.913694   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:42:27.913700   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:42:27.913717   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:42:27.913724   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:42:27.913730   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:42:27.913738   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:42:27.913747   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:42:27.913756   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:42:27.913765   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:42:29.915681   14223 client.go:171] duration metric: took 1m0.990045307s to LocalClient.Create
	I0906 12:42:31.917784   14223 start.go:128] duration metric: took 1m3.044773014s to createHost
	I0906 12:42:31.917805   14223 start.go:83] releasing machines lock for "force-systemd-flag-489000", held for 1m3.04489198s
	W0906 12:42:31.917821   14223 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:89:c8:0:24:c5
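The failure above is the driver repeatedly scanning /var/db/dhcpd_leases for the VM's generated MAC address (86:89:c8:0:24:c5) and never finding a matching entry. A minimal sketch of that kind of lease lookup, with hypothetical helper names (the real parser lives in docker-machine-driver-hyperkit), could look like:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findIPForMAC scans dhcpd_leases-style text for an entry whose
// HWAddress equals mac and returns its IPAddress, or "" if absent.
// Hypothetical helper for illustration only; not minikube's actual API.
func findIPForMAC(leases, mac string) string {
	// Entries look like: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:... Lease:...}
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)
	for _, line := range strings.Split(leases, "\n") {
		if m := re.FindStringSubmatch(line); m != nil && m[2] == mac {
			return m[1]
		}
	}
	return ""
}

func main() {
	sample := "{Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}"
	fmt.Println(findIPForMAC(sample, "f6:98:f2:21:d2:4c")) // 192.169.0.38
	// The MAC the test was polling for is absent, so the lookup misses:
	fmt.Println(findIPForMAC(sample, "86:89:c8:0:24:c5") == "") // true
}
```

When every polling attempt misses (37 stale `minikube` entries, none with the new MAC), the driver surfaces the "IP address never found in dhcp leases file" error seen above.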
	I0906 12:42:31.918180   14223 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:42:31.918208   14223 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:42:31.926787   14223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58149
	I0906 12:42:31.927168   14223 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:42:31.927577   14223 main.go:141] libmachine: Using API Version  1
	I0906 12:42:31.927599   14223 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:42:31.927812   14223 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:42:31.928164   14223 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:42:31.928187   14223 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:42:31.936587   14223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58151
	I0906 12:42:31.936916   14223 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:42:31.937316   14223 main.go:141] libmachine: Using API Version  1
	I0906 12:42:31.937338   14223 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:42:31.937562   14223 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:42:31.937679   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .GetState
	I0906 12:42:31.937774   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:31.937831   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:31.938799   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .DriverName
	I0906 12:42:31.960031   14223 out.go:177] * Deleting "force-systemd-flag-489000" in hyperkit ...
	I0906 12:42:31.981205   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .Remove
	I0906 12:42:31.981343   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:31.981356   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:31.981421   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:31.982364   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:31.982436   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | waiting for graceful shutdown
	I0906 12:42:32.984484   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:32.984571   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:32.985493   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | waiting for graceful shutdown
	I0906 12:42:33.985651   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:33.985744   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:33.987365   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | waiting for graceful shutdown
	I0906 12:42:34.987531   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:34.987609   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:34.988219   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | waiting for graceful shutdown
	I0906 12:42:35.989013   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:35.989139   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:35.989809   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | waiting for graceful shutdown
	I0906 12:42:36.991904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:36.991996   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14255
	I0906 12:42:36.993049   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | sending sigkill
	I0906 12:42:36.993058   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:42:37.008047   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:42:37 WARN : hyperkit: failed to read stdout: EOF
	I0906 12:42:37.008065   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:42:37 WARN : hyperkit: failed to read stderr: EOF
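The shutdown sequence above (six "waiting for graceful shutdown" polls, then "sending sigkill") is a poll-then-escalate pattern. A sketch of that control flow, with stand-in callbacks rather than the driver's real process checks:

```go
package main

import (
	"fmt"
	"time"
)

// stopVM mirrors the escalation in the log: poll a few times for a
// graceful exit, then fall back to SIGKILL. isRunning and kill are
// illustrative stand-ins, not the hyperkit driver's actual interface.
func stopVM(isRunning func() bool, kill func(), attempts int, interval time.Duration) bool {
	for i := 0; i < attempts; i++ {
		if !isRunning() {
			return true // process exited gracefully
		}
		fmt.Println("waiting for graceful shutdown")
		time.Sleep(interval)
	}
	// Grace period exhausted: force-terminate the VM process.
	fmt.Println("sending sigkill")
	kill()
	return false
}

func main() {
	// A VM that never honors the graceful request gets escalated to kill.
	stopVM(func() bool { return true }, func() { fmt.Println("killed") }, 3, 10*time.Millisecond)
}
```

The "failed to read stdout: EOF" warnings that follow the kill are expected: the hyperkit process's pipes close abruptly once SIGKILL lands.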
	W0906 12:42:37.024600   14223 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:89:c8:0:24:c5
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 86:89:c8:0:24:c5
	I0906 12:42:37.024616   14223 start.go:729] Will try again in 5 seconds ...
	I0906 12:42:42.026392   14223 start.go:360] acquireMachinesLock for force-systemd-flag-489000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:43:34.731712   14223 start.go:364] duration metric: took 52.7055515s to acquireMachinesLock for "force-systemd-flag-489000"
	I0906 12:43:34.731753   14223 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-489000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-489000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:43:34.731825   14223 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:43:34.753247   14223 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:43:34.753332   14223 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:43:34.753362   14223 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:43:34.761885   14223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58159
	I0906 12:43:34.762198   14223 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:43:34.762540   14223 main.go:141] libmachine: Using API Version  1
	I0906 12:43:34.762552   14223 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:43:34.762760   14223 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:43:34.762861   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .GetMachineName
	I0906 12:43:34.762956   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .DriverName
	I0906 12:43:34.763052   14223 start.go:159] libmachine.API.Create for "force-systemd-flag-489000" (driver="hyperkit")
	I0906 12:43:34.763078   14223 client.go:168] LocalClient.Create starting
	I0906 12:43:34.763110   14223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:43:34.763158   14223 main.go:141] libmachine: Decoding PEM data...
	I0906 12:43:34.763178   14223 main.go:141] libmachine: Parsing certificate...
	I0906 12:43:34.763220   14223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:43:34.763262   14223 main.go:141] libmachine: Decoding PEM data...
	I0906 12:43:34.763273   14223 main.go:141] libmachine: Parsing certificate...
	I0906 12:43:34.763286   14223 main.go:141] libmachine: Running pre-create checks...
	I0906 12:43:34.763292   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .PreCreateCheck
	I0906 12:43:34.763364   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:34.763386   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .GetConfigRaw
	I0906 12:43:34.795156   14223 main.go:141] libmachine: Creating machine...
	I0906 12:43:34.795166   14223 main.go:141] libmachine: (force-systemd-flag-489000) Calling .Create
	I0906 12:43:34.795258   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:34.795383   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:43:34.795250   14293 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:43:34.795454   14223 main.go:141] libmachine: (force-systemd-flag-489000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:43:35.029745   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:43:35.029651   14293 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/id_rsa...
	I0906 12:43:35.090429   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:43:35.090359   14293 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/force-systemd-flag-489000.rawdisk...
	I0906 12:43:35.090439   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Writing magic tar header
	I0906 12:43:35.090452   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Writing SSH key tar header
	I0906 12:43:35.091084   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | I0906 12:43:35.091047   14293 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000 ...
	I0906 12:43:35.472204   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:35.472223   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/hyperkit.pid
	I0906 12:43:35.472278   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Using UUID 0d5b7524-242e-4267-bf32-0b7184c6b216
	I0906 12:43:35.497550   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Generated MAC 1e:c4:1d:e1:8f:58
	I0906 12:43:35.497566   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-489000
	I0906 12:43:35.497595   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0d5b7524-242e-4267-bf32-0b7184c6b216", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:43:35.497625   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0d5b7524-242e-4267-bf32-0b7184c6b216", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:43:35.497670   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0d5b7524-242e-4267-bf32-0b7184c6b216", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/force-systemd-flag-489000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-489000"}
	I0906 12:43:35.497711   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0d5b7524-242e-4267-bf32-0b7184c6b216 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/force-systemd-flag-489000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-489000"
	I0906 12:43:35.497785   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:43:35.500670   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 DEBUG: hyperkit: Pid is 14294
	I0906 12:43:35.501093   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 0
	I0906 12:43:35.501113   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:35.501186   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:35.503304   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:35.503387   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:35.503398   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:35.503415   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:35.503422   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:35.503432   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:35.503444   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:35.503455   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:35.503466   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:35.503497   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:35.503516   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:35.503532   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:35.503539   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:35.503548   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:35.503563   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:35.503579   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:35.503594   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:35.503614   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:35.503641   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:35.503656   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:35.503671   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:35.503684   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:35.503695   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:35.503714   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:35.503730   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:35.503739   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:35.503748   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:35.503757   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:35.503775   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:35.503786   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:35.503797   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:35.503804   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:35.503811   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:35.503820   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:35.503828   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:35.503833   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:35.503851   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:35.503871   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:35.503887   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:35.508787   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:43:35.517037   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-flag-489000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:43:35.517881   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:43:35.517903   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:43:35.517916   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:43:35.517928   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:43:35.896871   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:43:35.896887   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:35 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:43:36.011484   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:43:36.011504   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:43:36.011517   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:43:36.011538   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:43:36.012398   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:43:36.012410   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:43:37.503895   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 1
	I0906 12:43:37.503912   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:37.503983   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:37.504831   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:37.504900   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:37.504917   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:37.504952   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:37.504961   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:37.504969   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:37.504982   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:37.504992   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:37.504999   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:37.505013   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:37.505020   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:37.505036   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:37.505048   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:37.505057   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:37.505066   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:37.505074   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:37.505082   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:37.505101   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:37.505111   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:37.505121   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:37.505129   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:37.505137   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:37.505146   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:37.505153   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:37.505159   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:37.505179   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:37.505192   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:37.505206   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:37.505215   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:37.505223   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:37.505230   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:37.505247   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:37.505259   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:37.505268   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:37.505277   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:37.505286   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:37.505300   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:37.505309   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:37.505317   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:39.505472   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 2
	I0906 12:43:39.505486   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:39.505539   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:39.506331   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:39.506392   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:39.506404   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:39.506417   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:39.506425   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:39.506450   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:39.506463   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:39.506472   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:39.506480   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:39.506490   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:39.506503   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:39.506517   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:39.506525   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:39.506543   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:39.506553   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:39.506561   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:39.506569   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:39.506579   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:39.506588   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:39.506599   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:39.506607   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:39.506624   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:39.506636   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:39.506647   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:39.506656   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:39.506673   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:39.506685   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:39.506697   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:39.506706   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:39.506716   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:39.506742   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:39.506751   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:39.506758   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:39.506765   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:39.506774   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:39.506792   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:39.506802   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:39.506818   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:39.506832   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:41.457420   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:41 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:43:41.457569   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:41 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:43:41.457580   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:41 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:43:41.477853   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | 2024/09/06 12:43:41 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:43:41.508671   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 3
	I0906 12:43:41.508693   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:41.508814   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:41.509888   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:41.510010   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:41.510026   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:41.510040   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:41.510056   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:41.510070   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:41.510079   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:41.510088   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:41.510098   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:41.510117   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:41.510127   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:41.510145   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:41.510171   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:41.510195   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:41.510210   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:41.510227   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:41.510238   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:41.510248   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:41.510261   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:41.510270   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:41.510281   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:41.510290   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:41.510301   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:41.510323   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:41.510339   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:41.510351   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:41.510362   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:41.510373   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:41.510388   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:41.510399   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:41.510409   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:41.510420   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:41.510430   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:41.510452   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:41.510473   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:41.510484   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:41.510497   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:41.510512   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:41.510523   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:43.511459   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 4
	I0906 12:43:43.511477   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:43.511564   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:43.512353   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:43.512417   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:43.512431   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:43.512441   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:43.512454   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:43.512461   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:43.512466   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:43.512475   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:43.512482   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:43.512489   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:43.512495   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:43.512501   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:43.512510   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:43.512517   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:43.512523   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:43.512531   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:43.512538   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:43.512545   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:43.512552   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:43.512558   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:43.512565   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:43.512572   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:43.512579   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:43.512586   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:43.512594   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:43.512618   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:43.512630   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:43.512649   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:43.512657   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:43.512665   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:43.512674   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:43.512681   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:43.512689   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:43.512696   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:43.512706   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:43.512723   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:43.512737   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:43.512747   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:43.512755   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:45.514611   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 5
	I0906 12:43:45.514632   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:45.514648   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:45.515439   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:45.515515   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:45.515531   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:45.515540   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:45.515547   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:45.515562   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:45.515581   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:45.515594   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:45.515602   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:45.515610   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:45.515621   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:45.515627   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:45.515634   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:45.515649   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:45.515667   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:45.515677   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:45.515686   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:45.515693   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:45.515702   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:45.515709   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:45.515718   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:45.515726   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:45.515736   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:45.515744   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:45.515755   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:45.515771   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:45.515779   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:45.515787   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:45.515795   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:45.515808   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:45.515816   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:45.515823   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:45.515833   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:45.515840   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:45.515848   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:45.515856   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:45.515864   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:45.515873   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:45.515881   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:47.516706   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 6
	I0906 12:43:47.516720   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:47.516781   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:47.517595   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:47.517668   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:47.517683   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:47.517705   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:47.517712   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:47.517720   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:47.517732   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:47.517752   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:47.517766   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:47.517782   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:47.517793   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:47.517803   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:47.517811   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:47.517827   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:47.517840   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:47.517849   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:47.517857   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:47.517864   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:47.517876   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:47.517887   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:47.517896   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:47.517904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:47.517910   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:47.517917   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:47.517924   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:47.517938   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:47.517946   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:47.517954   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:47.517962   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:47.517970   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:47.517979   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:47.517989   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:47.517998   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:47.518006   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:47.518018   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:47.518029   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:47.518038   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:47.518046   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:47.518055   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:49.518733   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 7
	I0906 12:43:49.518750   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:49.518846   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:49.519643   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:49.519687   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:49.519697   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:49.519705   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:49.519713   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:49.519734   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:49.519746   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:49.519755   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:49.519767   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:49.519781   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:49.519789   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:49.519795   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:49.519803   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:49.519811   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:49.519832   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:49.519846   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:49.519854   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:49.519860   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:49.519871   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:49.519878   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:49.519886   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:49.519904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:49.519926   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:49.519936   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:49.519949   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:49.519958   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:49.519965   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:49.519974   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:49.519995   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:49.520008   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:49.520021   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:49.520030   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:49.520041   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:49.520050   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:49.520057   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:49.520068   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:49.520076   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:49.520084   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:49.520093   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:51.521896   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 8
	I0906 12:43:51.521913   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:51.521954   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:51.522745   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:51.522813   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:51.522823   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:51.522833   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:51.522839   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:51.522861   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:51.522875   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:51.522884   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:51.522890   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:51.522897   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:51.522904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:51.522911   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:51.522929   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:51.522942   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:51.522953   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:51.522962   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:51.522969   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:51.522977   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:51.522985   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:51.522993   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:51.523001   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:51.523009   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:51.523016   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:51.523024   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:51.523031   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:51.523037   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:51.523044   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:51.523052   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:51.523060   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:51.523066   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:51.523076   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:51.523084   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:51.523091   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:51.523099   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:51.523106   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:51.523114   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:51.523121   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:51.523127   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:51.523136   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:53.523405   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 9
	I0906 12:43:53.523421   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:53.523466   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:53.524227   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:53.524302   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:53.524312   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:53.524341   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:53.524349   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:53.524359   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:53.524366   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:53.524372   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:53.524379   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:53.524392   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:53.524399   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:53.524405   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:53.524411   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:53.524420   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:53.524428   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:53.524437   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:53.524447   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:53.524458   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:53.524470   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:53.524486   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:53.524494   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:53.524502   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:53.524509   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:53.524517   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:53.524523   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:53.524529   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:53.524545   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:53.524560   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:53.524568   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:53.524576   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:53.524585   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:53.524593   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:53.524600   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:53.524606   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:53.524619   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:53.524626   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:53.524635   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:53.524659   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:53.524673   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:55.526101   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 10
	I0906 12:43:55.526117   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:55.526186   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:55.526960   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:55.527040   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:55.527052   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:55.527062   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:55.527071   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:55.527082   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:55.527093   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:55.527114   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:55.527131   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:55.527152   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:55.527161   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:55.527168   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:55.527175   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:55.527183   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:55.527192   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:55.527199   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:55.527208   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:55.527214   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:55.527223   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:55.527230   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:55.527238   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:55.527245   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:55.527251   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:55.527257   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:55.527264   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:55.527271   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:55.527299   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:55.527317   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:55.527326   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:55.527336   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:55.527345   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:55.527352   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:55.527360   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:55.527371   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:55.527379   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:55.527386   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:55.527394   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:55.527401   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:55.527409   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:57.529264   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 11
	I0906 12:43:57.529276   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:57.529344   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:57.530448   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:57.530501   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:57.530518   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:57.530526   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:57.530532   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:57.530550   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:57.530559   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:57.530566   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:57.530577   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:57.530584   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:57.530591   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:57.530602   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:57.530615   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:57.530622   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:57.530631   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:57.530639   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:57.530647   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:57.530666   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:57.530674   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:57.530689   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:57.530701   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:57.530709   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:57.530717   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:57.530724   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:57.530739   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:57.530754   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:57.530767   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:57.530777   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:57.530786   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:57.530793   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:57.530802   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:57.530817   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:57.530829   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:57.530842   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:57.530852   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:57.530860   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:57.530866   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:57.530874   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:57.530881   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:43:59.531655   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 12
	I0906 12:43:59.531669   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:43:59.531720   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:43:59.532548   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:43:59.532560   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:43:59.532580   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:43:59.532587   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:43:59.532596   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:43:59.532604   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:43:59.532611   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:43:59.532619   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:43:59.532629   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:43:59.532636   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:43:59.532643   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:43:59.532654   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:43:59.532661   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:43:59.532668   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:43:59.532678   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:43:59.532686   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:43:59.532694   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:43:59.532701   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:43:59.532709   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:43:59.532715   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:43:59.532725   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:43:59.532733   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:43:59.532742   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:43:59.532749   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:43:59.532757   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:43:59.532765   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:43:59.532773   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:43:59.532780   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:43:59.532787   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:43:59.532803   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:43:59.532816   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:43:59.532824   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:43:59.532833   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:43:59.532840   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:43:59.532848   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:43:59.532855   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:43:59.532863   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:43:59.532871   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:43:59.532882   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:01.532848   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 13
	I0906 12:44:01.532862   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:01.532923   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:01.533738   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:01.533809   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:01.533820   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:01.533828   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:01.533835   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:01.533848   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:01.533862   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:01.533879   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:01.533888   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:01.533895   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:01.533902   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:01.533909   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:01.533916   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:01.533923   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:01.533931   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:01.533947   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:01.533959   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:01.533975   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:01.533984   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:01.534000   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:01.534012   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:01.534021   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:01.534027   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:01.534048   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:01.534062   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:01.534072   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:01.534081   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:01.534088   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:01.534095   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:01.534108   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:01.534124   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:01.534140   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:01.534157   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:01.534172   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:01.534183   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:01.534191   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:01.534198   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:01.534204   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:01.534218   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:03.534869   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 14
	I0906 12:44:03.534884   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:03.534982   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:03.535787   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:03.535857   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:03.535868   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:03.535884   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:03.535899   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:03.535908   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:03.535918   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:03.535929   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:03.535937   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:03.535946   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:03.535965   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:03.535974   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:03.535981   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:03.535989   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:03.535998   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:03.536006   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:03.536014   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:03.536021   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:03.536028   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:03.536036   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:03.536042   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:03.536049   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:03.536062   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:03.536074   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:03.536096   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:03.536104   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:03.536113   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:03.536121   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:03.536135   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:03.536149   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:03.536158   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:03.536166   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:03.536174   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:03.536180   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:03.536186   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:03.536193   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:03.536200   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:03.536208   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:03.536215   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:05.538018   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 15
	I0906 12:44:05.538034   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:05.538090   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:05.538904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:05.538961   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:05.538973   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:05.538982   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:05.538989   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:05.539001   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:05.539013   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:05.539020   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:05.539027   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:05.539034   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:05.539042   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:05.539048   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:05.539054   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:05.539061   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:05.539069   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:05.539077   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:05.539093   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:05.539108   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:05.539122   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:05.539131   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:05.539139   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:05.539146   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:05.539158   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:05.539172   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:05.539187   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:05.539201   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:05.539209   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:05.539221   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:05.539231   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:05.539238   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:05.539244   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:05.539253   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:05.539266   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:05.539277   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:05.539292   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:05.539299   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:05.539309   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:05.539317   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:05.539325   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:07.541203   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 16
	I0906 12:44:07.541218   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:07.541259   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:07.542051   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:07.542110   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:07.542118   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:07.542127   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:07.542133   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:07.542153   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:07.542163   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:07.542170   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:07.542177   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:07.542185   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:07.542191   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:07.542200   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:07.542212   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:07.542221   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:07.542229   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:07.542237   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:07.542244   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:07.542252   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:07.542259   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:07.542265   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:07.542273   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:07.542279   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:07.542287   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:07.542295   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:07.542302   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:07.542309   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:07.542317   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:07.542324   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:07.542331   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:07.542339   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:07.542347   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:07.542364   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:07.542377   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:07.542385   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:07.542393   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:07.542401   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:07.542409   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:07.542416   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:07.542424   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:09.544286   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 17
	I0906 12:44:09.544299   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:09.544356   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:09.545175   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:09.545214   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:09.545225   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:09.545235   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:09.545245   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:09.545254   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:09.545266   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:09.545285   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:09.545305   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:09.545315   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:09.545324   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:09.545331   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:09.545339   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:09.545350   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:09.545358   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:09.545367   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:09.545374   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:09.545381   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:09.545389   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:09.545404   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:09.545423   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:09.545432   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:09.545440   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:09.545447   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:09.545454   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:09.545468   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:09.545486   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:09.545497   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:09.545504   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:09.545512   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:09.545520   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:09.545535   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:09.545545   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:09.545555   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:09.545564   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:09.545571   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:09.545578   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:09.545589   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:09.545603   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:11.547422   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 18
	I0906 12:44:11.547439   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:11.547508   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:11.548305   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:11.548360   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:11.548371   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:11.548411   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:11.548421   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:11.548428   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:11.548435   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:11.548462   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:11.548473   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:11.548484   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:11.548493   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:11.548500   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:11.548507   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:11.548516   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:11.548524   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:11.548532   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:11.548541   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:11.548549   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:11.548557   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:11.548565   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:11.548582   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:11.548595   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:11.548604   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:11.548612   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:11.548624   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:11.548633   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:11.548641   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:11.548649   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:11.548657   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:11.548663   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:11.548671   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:11.548679   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:11.548686   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:11.548695   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:11.548702   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:11.548710   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:11.548718   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:11.548725   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:11.548741   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:13.549721   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 19
	I0906 12:44:13.549736   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:13.549806   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:13.550592   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:13.550653   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:13.550662   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:13.550685   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:13.550694   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:13.550701   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:13.550708   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:13.550715   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:13.550722   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:13.550730   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:13.550753   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:13.550769   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:13.550780   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:13.550788   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:13.550796   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:13.550803   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:13.550811   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:13.550818   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:13.550826   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:13.550846   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:13.550858   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:13.550870   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:13.550879   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:13.550886   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:13.550892   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:13.550899   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:13.550907   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:13.550922   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:13.550936   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:13.550945   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:13.550953   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:13.550960   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:13.550968   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:13.550976   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:13.550984   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:13.550994   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:13.551003   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:13.551010   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:13.551019   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:15.552904   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 20
	I0906 12:44:15.552917   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:15.552961   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:15.553763   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:15.553819   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:15.553828   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:15.553837   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:15.553843   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:15.553857   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:15.553867   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:15.553874   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:15.553880   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:15.553895   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:15.553905   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:15.553914   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:15.553929   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:15.553946   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:15.553959   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:15.553967   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:15.553976   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:15.553987   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:15.553996   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:15.554003   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:15.554011   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:15.554018   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:15.554026   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:15.554033   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:15.554042   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:15.554050   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:15.554058   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:15.554065   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:15.554073   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:15.554080   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:15.554089   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:15.554101   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:15.554112   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:15.554120   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:15.554132   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:15.554140   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:15.554148   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:15.554156   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:15.554164   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:17.554188   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 21
	I0906 12:44:17.554202   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:17.554267   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:17.555075   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:17.555154   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:17.555165   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:17.555174   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:17.555180   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:17.555195   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:17.555201   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:17.555212   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:17.555223   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:17.555245   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:17.555261   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:17.555270   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:17.555278   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:17.555287   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:17.555295   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:17.555303   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:17.555311   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:17.555325   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:17.555333   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:17.555339   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:17.555347   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:17.555355   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:17.555363   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:17.555370   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:17.555378   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:17.555385   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:17.555394   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:17.555400   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:17.555409   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:17.555416   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:17.555422   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:17.555431   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:17.555438   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:17.555447   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:17.555454   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:17.555469   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:17.555477   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:17.555484   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:17.555493   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:19.557375   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 22
	I0906 12:44:19.557391   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:19.557420   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:19.558261   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:19.558301   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:19.558315   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:19.558323   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:19.558330   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:19.558338   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:19.558346   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:19.558353   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:19.558359   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:19.558367   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:19.558374   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:19.558383   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:19.558390   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:19.558398   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:19.558404   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:19.558412   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:19.558419   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:19.558427   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:19.558439   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:19.558457   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:19.558465   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:19.558474   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:19.558482   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:19.558491   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:19.558499   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:19.558507   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:19.558519   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:19.558530   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:19.558550   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:19.558563   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:19.558578   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:19.558591   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:19.558606   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:19.558619   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:19.558627   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:19.558652   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:19.558660   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:19.558668   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:19.558675   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:21.558582   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 23
	I0906 12:44:21.558596   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:21.558620   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:21.559439   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:21.559503   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:21.559511   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:21.559521   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:21.559530   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:21.559540   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:21.559547   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:21.559554   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:21.559562   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:21.559580   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:21.559590   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:21.559600   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:21.559607   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:21.559613   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:21.559620   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:21.559627   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:21.559635   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:21.559643   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:21.559652   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:21.559659   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:21.559665   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:21.559672   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:21.559678   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:21.559686   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:21.559694   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:21.559702   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:21.559710   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:21.559717   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:21.559725   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:21.559730   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:21.559738   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:21.559746   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:21.559763   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:21.559781   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:21.559789   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:21.559797   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:21.559804   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:21.559814   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:21.559829   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:23.560713   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 24
	I0906 12:44:23.560727   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:23.560801   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:23.561601   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:23.561666   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:25.563881   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 25
	I0906 12:44:25.563893   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:25.563941   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:25.564727   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:25.564796   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:27.567017   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 26
	I0906 12:44:27.567034   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:27.567087   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:27.567871   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:27.567917   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:29.568915   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 27
	I0906 12:44:29.568931   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:29.568991   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:29.569762   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:29.569861   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:29.569876   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:29.569888   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:29.569895   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:29.569902   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:29.569909   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:29.569916   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:29.569923   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:29.569930   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:29.569937   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:29.569946   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:29.569955   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:29.569963   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:29.569971   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:29.569980   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:29.569988   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:29.569998   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:29.570005   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:29.570011   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:29.570018   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:29.570024   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:29.570030   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:29.570046   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:29.570058   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:29.570065   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:29.570072   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:29.570089   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:29.570101   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:29.570112   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:29.570120   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:29.570130   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:29.570138   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:29.570145   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:29.570153   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:29.570159   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:29.570166   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:29.570173   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:29.570180   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:31.572036   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 28
	I0906 12:44:31.572050   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:31.572123   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:31.572908   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:31.572978   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:31.572991   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:31.573000   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:31.573007   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:31.573033   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:31.573043   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:31.573051   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:31.573057   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:31.573065   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:31.573071   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:31.573090   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:31.573099   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:31.573106   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:31.573114   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:31.573121   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:31.573127   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:31.573133   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:31.573142   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:31.573150   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:31.573160   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:31.573169   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:31.573175   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:31.573186   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:31.573193   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:31.573201   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:31.573209   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:31.573215   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:31.573231   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:31.573243   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:31.573259   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:31.573267   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:31.573275   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:31.573283   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:31.573290   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:31.573298   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:31.573309   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:31.573321   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:31.573336   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:33.573646   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Attempt 29
	I0906 12:44:33.573671   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:44:33.573740   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | hyperkit pid from json: 14294
	I0906 12:44:33.574509   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Searching for 1e:c4:1d:e1:8f:58 in /var/db/dhcpd_leases ...
	I0906 12:44:33.574572   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:44:33.574584   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:44:33.574593   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:44:33.574599   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:44:33.574606   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:44:33.574615   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:44:33.574622   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:44:33.574628   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:44:33.574644   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:44:33.574655   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:44:33.574674   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:44:33.574682   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:44:33.574693   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:44:33.574703   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:44:33.574710   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:44:33.574718   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:44:33.574725   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:44:33.574733   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:44:33.574740   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:44:33.574747   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:44:33.574754   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:44:33.574762   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:44:33.574776   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:44:33.574788   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:44:33.574812   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:44:33.574826   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:44:33.574843   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:44:33.574854   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:44:33.574863   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:44:33.574870   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:44:33.574878   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:44:33.574887   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:44:33.574893   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:44:33.574901   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:44:33.574909   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:44:33.574917   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:44:33.574936   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:44:33.574949   14223 main.go:141] libmachine: (force-systemd-flag-489000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:44:35.575504   14223 client.go:171] duration metric: took 1m0.812749274s to LocalClient.Create
	I0906 12:44:37.577344   14223 start.go:128] duration metric: took 1m2.845852864s to createHost
	I0906 12:44:37.577360   14223 start.go:83] releasing machines lock for "force-systemd-flag-489000", held for 1m2.845979753s
	W0906 12:44:37.577446   14223 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-489000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:c4:1d:e1:8f:58
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-489000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:c4:1d:e1:8f:58
	I0906 12:44:37.661423   14223 out.go:201] 
	W0906 12:44:37.682611   14223 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:c4:1d:e1:8f:58
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 1e:c4:1d:e1:8f:58
	W0906 12:44:37.682625   14223 out.go:270] * 
	* 
	W0906 12:44:37.683338   14223 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:44:37.744565   14223 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-489000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-489000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-489000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (189.703023ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-489000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-489000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-06 12:44:38.148989 -0700 PDT m=+4555.257453240
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-489000 -n force-systemd-flag-489000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-489000 -n force-systemd-flag-489000: exit status 7 (82.921776ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0906 12:44:38.229656   14317 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 12:44:38.229679   14317 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-489000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-489000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-489000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-489000: (5.256430135s)
--- FAIL: TestForceSystemdFlag (251.91s)

TestForceSystemdEnv (236.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-823000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0906 12:37:42.615149    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:37:59.521972    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:38:13.455935    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-823000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m51.274412419s)

-- stdout --
	* [force-systemd-env-823000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-823000" primary control-plane node in "force-systemd-env-823000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-823000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0906 12:37:37.789897   14172 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:37:37.790096   14172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:37.790102   14172 out.go:358] Setting ErrFile to fd 2...
	I0906 12:37:37.790106   14172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:37:37.790283   14172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:37:37.791732   14172 out.go:352] Setting JSON to false
	I0906 12:37:37.814220   14172 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13028,"bootTime":1725638429,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:37:37.814335   14172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:37:37.835757   14172 out.go:177] * [force-systemd-env-823000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:37:37.876547   14172 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:37:37.876579   14172 notify.go:220] Checking for updates...
	I0906 12:37:37.918480   14172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:37:37.939574   14172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:37:37.960309   14172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:37:37.980497   14172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:37:38.002563   14172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0906 12:37:38.023723   14172 config.go:182] Loaded profile config "offline-docker-273000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:37:38.023802   14172 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:37:38.052540   14172 out.go:177] * Using the hyperkit driver based on user configuration
	I0906 12:37:38.094309   14172 start.go:297] selected driver: hyperkit
	I0906 12:37:38.094320   14172 start.go:901] validating driver "hyperkit" against <nil>
	I0906 12:37:38.094330   14172 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:37:38.097205   14172 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:38.097317   14172 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:37:38.105655   14172 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:37:38.109436   14172 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:37:38.109455   14172 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:37:38.109484   14172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 12:37:38.109681   14172 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 12:37:38.109709   14172 cni.go:84] Creating CNI manager for ""
	I0906 12:37:38.109723   14172 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 12:37:38.109727   14172 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 12:37:38.109790   14172 start.go:340] cluster config:
	{Name:force-systemd-env-823000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:37:38.109875   14172 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:37:38.151499   14172 out.go:177] * Starting "force-systemd-env-823000" primary control-plane node in "force-systemd-env-823000" cluster
	I0906 12:37:38.172314   14172 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:37:38.172339   14172 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:37:38.172353   14172 cache.go:56] Caching tarball of preloaded images
	I0906 12:37:38.172447   14172 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:37:38.172456   14172 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:37:38.172528   14172 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/force-systemd-env-823000/config.json ...
	I0906 12:37:38.172545   14172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/force-systemd-env-823000/config.json: {Name:mk819e1d0afd6206f8480e55160c7607b2a4c65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:37:38.172887   14172 start.go:360] acquireMachinesLock for force-systemd-env-823000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:38:20.112858   14172 start.go:364] duration metric: took 41.940276856s to acquireMachinesLock for "force-systemd-env-823000"
	I0906 12:38:20.112914   14172 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-823000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-
systemd-env-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:38:20.112972   14172 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:38:20.134377   14172 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:38:20.134541   14172 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:38:20.134584   14172 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:38:20.143271   14172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58113
	I0906 12:38:20.143610   14172 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:38:20.144008   14172 main.go:141] libmachine: Using API Version  1
	I0906 12:38:20.144019   14172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:38:20.144229   14172 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:38:20.144341   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .GetMachineName
	I0906 12:38:20.144434   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .DriverName
	I0906 12:38:20.144531   14172 start.go:159] libmachine.API.Create for "force-systemd-env-823000" (driver="hyperkit")
	I0906 12:38:20.144561   14172 client.go:168] LocalClient.Create starting
	I0906 12:38:20.144601   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:38:20.144650   14172 main.go:141] libmachine: Decoding PEM data...
	I0906 12:38:20.144664   14172 main.go:141] libmachine: Parsing certificate...
	I0906 12:38:20.144726   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:38:20.144766   14172 main.go:141] libmachine: Decoding PEM data...
	I0906 12:38:20.144780   14172 main.go:141] libmachine: Parsing certificate...
	I0906 12:38:20.144793   14172 main.go:141] libmachine: Running pre-create checks...
	I0906 12:38:20.144801   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .PreCreateCheck
	I0906 12:38:20.144866   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.145069   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .GetConfigRaw
	I0906 12:38:20.197186   14172 main.go:141] libmachine: Creating machine...
	I0906 12:38:20.197195   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .Create
	I0906 12:38:20.197282   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.197415   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:38:20.197277   14186 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:38:20.197513   14172 main.go:141] libmachine: (force-systemd-env-823000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:38:20.406694   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:38:20.406593   14186 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/id_rsa...
	I0906 12:38:20.481581   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:38:20.481509   14186 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/force-systemd-env-823000.rawdisk...
	I0906 12:38:20.481592   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Writing magic tar header
	I0906 12:38:20.481607   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Writing SSH key tar header
	I0906 12:38:20.482187   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:38:20.482149   14186 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000 ...
	I0906 12:38:20.860907   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.860927   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/hyperkit.pid
	I0906 12:38:20.860941   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Using UUID 78a3c82d-67e6-4a32-8fe0-1d28496f4ddc
	I0906 12:38:20.885954   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Generated MAC 5e:33:d0:b1:3a:e4
	I0906 12:38:20.885969   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-823000
	I0906 12:38:20.886029   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"78a3c82d-67e6-4a32-8fe0-1d28496f4ddc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:38:20.886071   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"78a3c82d-67e6-4a32-8fe0-1d28496f4ddc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:38:20.886139   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "78a3c82d-67e6-4a32-8fe0-1d28496f4ddc", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/force-systemd-env-823000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-sys
temd-env-823000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-823000"}
	I0906 12:38:20.886192   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 78a3c82d-67e6-4a32-8fe0-1d28496f4ddc -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/force-systemd-env-823000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/bzimage,/Users/jenkins/minikube-integration/19
576-7784/.minikube/machines/force-systemd-env-823000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-823000"
	I0906 12:38:20.886238   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:38:20.889219   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 DEBUG: hyperkit: Pid is 14187
	I0906 12:38:20.889658   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 0
	I0906 12:38:20.889686   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:20.889768   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:20.890731   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:20.890816   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:20.890835   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:20.890880   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:20.890908   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:20.890927   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:20.890955   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:20.890968   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:20.890994   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:20.891004   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:20.891023   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:20.891035   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:20.891043   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:20.891049   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:20.891066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:20.891075   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:20.891084   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:20.891090   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:20.891097   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:20.891104   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:20.891109   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:20.891116   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:20.891122   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:20.891130   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:20.891140   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:20.891147   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:20.891156   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:20.891166   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:20.891177   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:20.891185   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:20.891197   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:20.891206   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:20.891214   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:20.891223   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:20.891232   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:20.891240   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:20.891251   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:20.891262   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:20.891271   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:20.896720   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:38:20.904753   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:38:20.905703   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:38:20.905754   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:38:20.905782   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:38:20.905794   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:38:21.285529   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:38:21.285544   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:38:21.400183   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:38:21.400203   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:38:21.400232   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:38:21.400250   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:38:21.401083   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:38:21.401093   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:38:22.891323   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 1
	I0906 12:38:22.891337   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:22.891392   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:22.892166   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:22.892249   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:22.892261   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:22.892272   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:22.892279   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:22.892286   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:22.892292   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:22.892299   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:22.892304   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:22.892318   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:22.892332   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:22.892341   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:22.892347   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:22.892353   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:22.892372   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:22.892390   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:22.892402   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:22.892411   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:22.892419   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:22.892426   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:22.892434   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:22.892440   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:22.892447   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:22.892455   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:22.892461   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:22.892468   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:22.892475   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:22.892481   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:22.892488   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:22.892502   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:22.892511   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:22.892518   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:22.892526   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:22.892535   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:22.892542   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:22.892549   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:22.892557   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:22.892564   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:22.892577   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:24.894489   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 2
	I0906 12:38:24.894506   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:24.894627   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:24.895395   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:24.895462   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:24.895471   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:24.895479   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:24.895504   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:24.895513   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:24.895521   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:24.895527   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:24.895545   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:24.895552   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:24.895565   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:24.895576   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:24.895588   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:24.895598   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:24.895605   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:24.895614   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:24.895625   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:24.895634   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:24.895641   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:24.895654   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:24.895663   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:24.895671   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:24.895678   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:24.895685   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:24.895692   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:24.895699   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:24.895709   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:24.895723   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:24.895734   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:24.895742   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:24.895749   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:24.895757   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:24.895763   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:24.895771   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:24.895778   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:24.895786   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:24.895793   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:24.895802   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:24.895810   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:26.783720   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:38:26.783908   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:38:26.783920   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:38:26.803731   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:38:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:38:26.897896   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 3
	I0906 12:38:26.897935   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:26.898055   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:26.899081   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:26.899210   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:26.899222   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:26.899232   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:26.899240   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:26.899262   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:26.899280   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:26.899293   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:26.899308   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:26.899322   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:26.899335   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:26.899355   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:26.899370   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:26.899382   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:26.899393   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:26.899404   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:26.899415   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:26.899424   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:26.899435   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:26.899445   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:26.899456   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:26.899477   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:26.899498   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:26.899509   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:26.899523   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:26.899545   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:26.899556   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:26.899567   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:26.899577   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:26.899594   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:26.899610   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:26.899621   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:26.899631   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:26.899641   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:26.899652   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:26.899666   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:26.899677   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:26.899698   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:26.899714   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:28.899836   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 4
	I0906 12:38:28.899850   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:28.899928   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:28.900712   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:28.900788   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:28.900802   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:28.900811   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:28.900823   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:28.900836   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:28.900850   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:28.900858   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:28.900864   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:28.900883   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:28.900894   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:28.900901   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:28.900916   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:28.900935   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:28.900946   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:28.900954   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:28.900962   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:28.900969   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:28.900977   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:28.900992   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:28.901001   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:28.901008   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:28.901015   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:28.901021   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:28.901028   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:28.901036   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:28.901052   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:28.901065   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:28.901073   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:28.901082   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:28.901089   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:28.901096   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:28.901107   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:28.901118   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:28.901136   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:28.901148   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:28.901161   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:28.901170   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:28.901181   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:30.902382   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 5
	I0906 12:38:30.902394   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:30.902458   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:30.903235   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:30.903292   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:30.903316   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:30.903323   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:30.903331   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:30.903338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:30.903356   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:30.903372   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:30.903380   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:30.903386   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:30.903412   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:30.903422   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:30.903444   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:30.903453   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:30.903461   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:30.903478   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:30.903485   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:30.903492   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:30.903502   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:30.903510   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:30.903519   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:30.903526   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:30.903534   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:30.903541   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:30.903548   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:30.903567   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:30.903576   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:30.903595   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:30.903606   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:30.903615   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:30.903633   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:30.903640   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:30.903650   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:30.903659   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:30.903666   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:30.903673   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:30.903679   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:30.903687   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:30.903696   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:32.903549   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 6
	I0906 12:38:32.903563   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:32.903598   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:32.904355   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:32.904426   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:32.904436   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:32.904455   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:32.904462   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:32.904469   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:32.904476   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:32.904482   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:32.904491   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:32.904498   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:32.904505   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:32.904536   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:32.904551   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:32.904564   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:32.904574   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:32.904584   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:32.904592   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:32.904600   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:32.904607   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:32.904616   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:32.904638   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:32.904651   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:32.904659   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:32.904667   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:32.904674   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:32.904680   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:32.904689   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:32.904698   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:32.904706   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:32.904716   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:32.904723   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:32.904729   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:32.904743   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:32.904755   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:32.904763   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:32.904770   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:32.904788   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:32.904796   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:32.904806   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:34.906099   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 7
	I0906 12:38:34.906113   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:34.906181   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:34.906938   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:34.907007   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:34.907019   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:34.907039   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:34.907051   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:34.907069   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:34.907076   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:34.907086   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:34.907092   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:34.907099   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:34.907107   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:34.907113   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:34.907120   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:34.907126   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:34.907132   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:34.907141   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:34.907147   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:34.907153   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:34.907165   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:34.907172   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:34.907180   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:34.907187   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:34.907195   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:34.907208   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:34.907218   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:34.907227   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:34.907236   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:34.907243   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:34.907251   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:34.907268   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:34.907280   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:34.907293   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:34.907303   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:34.907311   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:34.907321   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:34.907330   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:34.907338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:34.907344   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:34.907352   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:36.908259   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 8
	I0906 12:38:36.908273   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:36.908349   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:36.909112   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:36.909176   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:36.909188   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:36.909198   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:36.909204   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:36.909211   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:36.909220   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:36.909227   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:36.909241   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:36.909263   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:36.909272   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:36.909279   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:36.909290   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:36.909300   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:36.909307   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:36.909314   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:36.909321   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:36.909336   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:36.909348   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:36.909363   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:36.909372   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:36.909379   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:36.909389   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:36.909396   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:36.909402   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:36.909424   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:36.909437   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:36.909445   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:36.909451   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:36.909457   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:36.909463   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:36.909472   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:36.909478   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:36.909484   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:36.909492   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:36.909499   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:36.909504   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:36.909511   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:36.909520   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:38.910352   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 9
	I0906 12:38:38.910367   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:38.910443   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:38.911189   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:38.911266   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:38.911280   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:38.911290   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:38.911298   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:38.911306   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:38.911314   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:38.911332   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:38.911343   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:38.911350   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:38.911360   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:38.911376   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:38.911385   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:38.911392   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:38.911400   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:38.911408   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:38.911424   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:38.911436   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:38.911444   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:38.911451   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:38.911459   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:38.911467   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:38.911474   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:38.911481   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:38.911487   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:38.911496   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:38.911503   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:38.911510   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:38.911517   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:38.911525   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:38.911531   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:38.911539   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:38.911546   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:38.911557   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:38.911565   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:38.911573   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:38.911580   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:38.911585   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:38.911599   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:40.911812   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 10
	I0906 12:38:40.911824   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:40.911896   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:40.912650   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:40.912717   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:40.912726   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:40.912735   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:40.912763   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:40.912775   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:40.912785   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:40.912795   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:40.912813   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:40.912823   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:40.912831   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:40.912839   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:40.912856   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:40.912868   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:40.912877   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:40.912893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:40.912902   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:40.912910   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:40.912917   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:40.912928   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:40.912937   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:40.912948   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:40.912958   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:40.912964   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:40.912977   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:40.912985   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:40.912993   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:40.913001   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:40.913008   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:40.913015   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:40.913021   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:40.913029   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:40.913036   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:40.913042   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:40.913048   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:40.913055   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:40.913064   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:40.913081   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:40.913093   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:42.913230   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 11
	I0906 12:38:42.913245   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:42.913312   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:42.914070   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:42.914139   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:42.914153   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:42.914167   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:42.914178   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:42.914192   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:42.914206   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:42.914214   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:42.914221   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:42.914239   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:42.914252   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:42.914260   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:42.914269   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:42.914276   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:42.914283   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:42.914289   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:42.914305   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:42.914311   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:42.914318   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:42.914326   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:42.914333   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:42.914340   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:42.914347   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:42.914355   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:42.914362   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:42.914370   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:42.914377   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:42.914386   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:42.914393   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:42.914401   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:42.914410   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:42.914418   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:42.914429   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:42.914439   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:42.914456   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:42.914463   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:42.914472   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:42.914481   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:42.914489   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:44.916375   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 12
	I0906 12:38:44.916387   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:44.916447   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:44.917219   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:44.917292   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:44.917300   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:44.917314   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:44.917323   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:44.917339   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:44.917361   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:44.917370   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:44.917380   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:44.917390   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:44.917396   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:44.917403   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:44.917409   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:44.917417   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:44.917423   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:44.917438   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:44.917452   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:44.917460   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:44.917467   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:44.917474   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:44.917482   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:44.917489   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:44.917495   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:44.917502   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:44.917527   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:44.917535   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:44.917542   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:44.917550   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:44.917559   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:44.917566   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:44.917573   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:44.917578   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:44.917585   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:44.917593   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:44.917601   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:44.917608   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:44.917616   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:44.917631   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:44.917640   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:46.918852   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 13
	I0906 12:38:46.918869   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:46.918913   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:46.919678   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:46.919740   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:46.919752   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:46.919800   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:46.919811   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:46.919818   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:46.919825   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:46.919831   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:46.919837   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:46.919843   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:46.919859   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:46.919871   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:46.919879   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:46.919887   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:46.919899   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:46.919909   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:46.919917   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:46.919925   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:46.919931   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:46.919939   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:46.919953   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:46.919964   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:46.919980   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:46.919993   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:46.920009   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:46.920017   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:46.920024   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:46.920032   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:46.920038   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:46.920046   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:46.920061   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:46.920081   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:46.920089   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:46.920098   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:46.920105   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:46.920114   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:46.920121   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:46.920129   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:46.920138   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:48.920928   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 14
	I0906 12:38:48.920943   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:48.920974   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:48.921752   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:48.921805   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:48.921817   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:48.921834   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:48.921850   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:48.921861   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:48.921868   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:48.921885   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:48.921894   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:48.921908   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:48.921922   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:48.921930   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:48.921942   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:48.921950   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:48.921958   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:48.921965   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:48.921972   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:48.921987   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:48.921998   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:48.922013   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:48.922026   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:48.922040   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:48.922049   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:48.922056   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:48.922063   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:48.922071   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:48.922079   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:48.922086   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:48.922101   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:48.922119   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:48.922131   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:48.922142   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:48.922155   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:48.922170   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:48.922176   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:48.922185   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:48.922194   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:48.922201   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:48.922207   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:50.922573   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 15
	I0906 12:38:50.922586   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:50.922652   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:50.923460   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:50.923507   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:50.923523   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:50.923561   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:50.923571   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:50.923578   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:50.923584   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:50.923596   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:50.923609   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:50.923629   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:50.923637   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:50.923655   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:50.923667   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:50.923675   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:50.923681   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:50.923695   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:50.923705   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:50.923712   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:50.923720   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:50.923727   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:50.923733   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:50.923739   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:50.923746   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:50.923758   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:50.923771   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:50.923780   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:50.923786   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:50.923792   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:50.923798   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:50.923806   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:50.923813   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:50.923821   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:50.923835   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:50.923853   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:50.923872   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:50.923881   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:50.923898   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:50.923911   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:50.923920   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:52.924452   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 16
	I0906 12:38:52.924469   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:52.924495   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:52.925261   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:52.925330   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:52.925341   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:52.925349   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:52.925355   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:52.925361   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:52.925368   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:52.925381   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:52.925388   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:52.925394   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:52.925401   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:52.925416   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:52.925430   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:52.925442   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:52.925461   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:52.925473   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:52.925481   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:52.925488   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:52.925494   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:52.925501   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:52.925513   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:52.925527   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:52.925542   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:52.925554   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:52.925567   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:52.925577   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:52.925585   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:52.925593   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:52.925600   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:52.925606   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:52.925612   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:52.925628   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:52.925637   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:52.925644   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:52.925652   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:52.925658   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:52.925675   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:52.925684   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:52.925692   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:54.926871   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 17
	I0906 12:38:54.926887   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:54.926938   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:54.927713   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:54.927764   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:54.927774   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:54.927785   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:54.927797   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:54.927806   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:54.927816   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:54.927828   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:54.927842   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:54.927851   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:54.927858   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:54.927873   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:54.927884   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:54.927893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:54.927900   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:54.927908   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:54.927917   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:54.927924   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:54.927933   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:54.927940   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:54.927947   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:54.927954   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:54.927972   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:54.927980   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:54.927988   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:54.927995   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:54.928002   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:54.928009   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:54.928017   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:54.928033   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:54.928047   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:54.928055   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:54.928069   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:54.928080   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:54.928100   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:54.928107   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:54.928121   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:54.928128   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:54.928137   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:56.929620   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 18
	I0906 12:38:56.929633   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:56.929725   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:56.930468   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:56.930543   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:56.930552   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:56.930576   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:56.930585   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:56.930604   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:56.930617   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:56.930625   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:56.930639   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:56.930651   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:56.930661   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:56.930669   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:56.930685   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:56.930693   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:56.930700   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:56.930708   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:56.930715   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:56.930723   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:56.930740   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:56.930750   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:56.930759   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:56.930766   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:56.930773   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:56.930779   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:56.930788   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:56.930796   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:56.930804   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:56.930822   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:56.930834   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:56.930843   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:56.930851   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:56.930858   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:56.930866   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:56.930877   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:56.930886   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:56.930893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:56.930902   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:56.930909   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:56.930915   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:38:58.932740   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 19
	I0906 12:38:58.932754   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:38:58.932824   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:38:58.933588   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:38:58.933661   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:38:58.933671   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:38:58.933679   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:38:58.933691   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:38:58.933698   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:38:58.933705   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:38:58.933712   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:38:58.933719   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:38:58.933725   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:38:58.933731   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:38:58.933738   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:38:58.933745   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:38:58.933754   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:38:58.933763   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:38:58.933781   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:38:58.933793   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:38:58.933801   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:38:58.933809   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:38:58.933824   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:38:58.933838   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:38:58.933845   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:38:58.933852   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:38:58.933866   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:38:58.933879   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:38:58.933897   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:38:58.933906   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:38:58.933914   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:38:58.933922   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:38:58.933929   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:38:58.933937   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:38:58.933944   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:38:58.933951   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:38:58.933960   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:38:58.933967   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:38:58.933975   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:38:58.933986   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:38:58.933992   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:38:58.934003   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:00.935844   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 20
	I0906 12:39:00.935859   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:00.935924   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:00.936689   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:00.936751   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:00.936761   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:00.936773   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:00.936785   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:00.936792   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:00.936798   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:00.936807   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:00.936816   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:00.936825   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:00.936843   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:00.936853   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:00.936863   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:00.936876   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:00.936885   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:00.936893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:00.936916   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:00.936928   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:00.936937   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:00.936946   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:00.936955   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:00.936962   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:00.936968   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:00.936975   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:00.936987   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:00.936998   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:00.937007   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:00.937014   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:00.937026   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:00.937038   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:00.937046   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:00.937055   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:00.937066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:00.937074   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:00.937082   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:00.937090   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:00.937097   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:00.937105   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:00.937114   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:02.937496   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 21
	I0906 12:39:02.937515   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:02.937607   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:02.938343   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:02.938408   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:02.938420   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:02.938428   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:02.938435   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:02.938461   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:02.938473   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:02.938481   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:02.938488   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:02.938501   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:02.938514   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:02.938523   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:02.938529   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:02.938535   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:02.938545   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:02.938552   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:02.938560   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:02.938572   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:02.938582   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:02.938589   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:02.938596   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:02.938605   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:02.938613   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:02.938620   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:02.938628   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:02.938645   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:02.938657   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:02.938665   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:02.938673   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:02.938681   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:02.938689   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:02.938703   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:02.938725   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:02.938733   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:02.938757   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:02.938770   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:02.938783   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:02.938792   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:02.938801   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:04.940287   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 22
	I0906 12:39:04.940299   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:04.940386   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:04.941176   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:04.941241   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:04.941253   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:04.941260   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:04.941270   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:04.941291   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:04.941302   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:04.941325   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:04.941338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:04.941345   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:04.941353   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:04.941361   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:04.941366   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:04.941378   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:04.941385   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:04.941393   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:04.941401   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:04.941408   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:04.941415   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:04.941424   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:04.941433   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:04.941441   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:04.941453   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:04.941461   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:04.941476   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:04.941486   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:04.941495   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:04.941501   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:04.941507   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:04.941514   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:04.941526   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:04.941534   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:04.941541   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:04.941549   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:04.941563   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:04.941571   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:04.941590   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:04.941597   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:04.941605   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:06.943233   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 23
	I0906 12:39:06.943249   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:06.943304   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:06.944066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:06.944131   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:06.944144   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:06.944156   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:06.944163   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:06.944190   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:06.944202   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:06.944209   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:06.944217   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:06.944224   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:06.944231   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:06.944238   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:06.944244   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:06.944251   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:06.944259   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:06.944268   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:06.944280   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:06.944288   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:06.944296   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:06.944303   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:06.944311   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:06.944318   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:06.944325   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:06.944332   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:06.944340   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:06.944347   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:06.944353   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:06.944366   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:06.944378   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:06.944386   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:06.944395   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:06.944402   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:06.944410   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:06.944417   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:06.944426   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:06.944438   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:06.944449   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:06.944457   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:06.944466   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:08.944815   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 24
	I0906 12:39:08.944830   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:08.944899   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:08.945689   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:08.945733   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:08.945745   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:08.945753   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:08.945761   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:08.945780   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:08.945792   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:08.945799   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:08.945808   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:08.945824   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:08.945834   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:08.945842   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:08.945849   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:08.945857   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:08.945867   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:08.945874   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:08.945882   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:08.945893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:08.945901   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:08.945907   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:08.945917   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:08.945932   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:08.945944   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:08.945954   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:08.945970   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:08.945978   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:08.945986   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:08.946000   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:08.946009   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:08.946016   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:08.946025   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:08.946033   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:08.946041   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:08.946048   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:08.946056   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:08.946066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:08.946073   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:08.946080   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:08.946088   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:10.946578   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 25
	I0906 12:39:10.946601   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:10.946674   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:10.947440   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:10.947526   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:12.949738   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 26
	I0906 12:39:12.949750   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:12.949804   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:12.950581   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:12.950647   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:14.952854   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 27
	I0906 12:39:14.952870   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:14.952955   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:14.953699   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:14.953775   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:14.953785   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:14.953794   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:14.953799   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:14.953806   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:14.953811   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:14.953818   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:14.953823   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:14.953846   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:14.953872   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:14.953879   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:14.953888   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:14.953893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:14.953901   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:14.953909   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:14.953920   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:14.953933   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:14.953941   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:14.953947   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:14.953962   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:14.953975   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:14.953984   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:14.953991   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:14.954005   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:14.954019   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:14.954028   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:14.954034   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:14.954048   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:14.954056   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:14.954070   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:14.954083   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:14.954091   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:14.954100   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:14.954108   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:14.954119   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:14.954128   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:14.954141   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:14.954149   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:16.955954   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 28
	I0906 12:39:16.955966   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:16.956049   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:16.956823   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:16.956884   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:16.956895   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:16.956902   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:16.956909   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:16.956925   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:16.956933   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:16.956940   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:16.956951   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:16.956959   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:16.956966   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:16.956974   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:16.956981   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:16.956991   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:16.956998   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:16.957005   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:16.957014   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:16.957022   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:16.957029   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:16.957035   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:16.957041   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:16.957049   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:16.957057   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:16.957064   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:16.957081   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:16.957094   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:16.957105   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:16.957113   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:16.957121   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:16.957136   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:16.957144   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:16.957151   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:16.957158   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:16.957169   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:16.957178   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:16.957185   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:16.957191   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:16.957197   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:16.957204   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:18.958466   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 29
	I0906 12:39:18.958484   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:18.958531   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:18.959317   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 5e:33:d0:b1:3a:e4 in /var/db/dhcpd_leases ...
	I0906 12:39:18.959377   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:39:18.959387   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:39:18.959399   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:39:18.959406   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:39:18.959416   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:39:18.959424   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:39:18.959437   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:39:18.959454   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:39:18.959463   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:39:18.959470   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:39:18.959485   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:39:18.959499   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:39:18.959508   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:39:18.959516   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:39:18.959528   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:39:18.959536   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:39:18.959543   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:39:18.959550   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:39:18.959558   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:39:18.959571   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:39:18.959579   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:39:18.959593   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:39:18.959601   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:39:18.959609   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:39:18.959617   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:39:18.959625   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:39:18.959631   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:39:18.959639   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:39:18.959650   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:39:18.959659   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:39:18.959668   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:39:18.959675   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:39:18.959690   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:39:18.959700   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:39:18.959708   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:39:18.959721   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:39:18.959729   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:39:18.959739   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:39:20.959936   14172 client.go:171] duration metric: took 1m0.815840114s to LocalClient.Create
	I0906 12:39:22.962004   14172 start.go:128] duration metric: took 1m2.84951224s to createHost
	I0906 12:39:22.962020   14172 start.go:83] releasing machines lock for "force-systemd-env-823000", held for 1m2.849636834s
	W0906 12:39:22.962038   14172 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:33:d0:b1:3a:e4
	I0906 12:39:22.962382   14172 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:39:22.962410   14172 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:39:22.971142   14172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58115
	I0906 12:39:22.971468   14172 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:39:22.971827   14172 main.go:141] libmachine: Using API Version  1
	I0906 12:39:22.971841   14172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:39:22.972082   14172 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:39:22.972456   14172 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:39:22.972478   14172 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:39:22.981010   14172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58117
	I0906 12:39:22.981537   14172 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:39:22.981887   14172 main.go:141] libmachine: Using API Version  1
	I0906 12:39:22.981898   14172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:39:22.982139   14172 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:39:22.982238   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .GetState
	I0906 12:39:22.982338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:22.982401   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:22.983387   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .DriverName
	I0906 12:39:23.004080   14172 out.go:177] * Deleting "force-systemd-env-823000" in hyperkit ...
	I0906 12:39:23.046289   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .Remove
	I0906 12:39:23.046421   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:23.046433   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:23.046510   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:23.047448   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:23.047501   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | waiting for graceful shutdown
	I0906 12:39:24.049440   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:24.049515   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:24.050423   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | waiting for graceful shutdown
	I0906 12:39:25.051237   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:25.051353   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:25.053050   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | waiting for graceful shutdown
	I0906 12:39:26.053515   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:26.053630   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:26.054353   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | waiting for graceful shutdown
	I0906 12:39:27.054900   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:27.054970   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:27.055610   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | waiting for graceful shutdown
	I0906 12:39:28.057642   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:39:28.057661   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14187
	I0906 12:39:28.058647   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | sending sigkill
	I0906 12:39:28.058657   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0906 12:39:28.071290   14172 out.go:270] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:33:d0:b1:3a:e4
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 5e:33:d0:b1:3a:e4
	I0906 12:39:28.071314   14172 start.go:729] Will try again in 5 seconds ...
	I0906 12:39:28.082534   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:39:28 WARN : hyperkit: failed to read stderr: EOF
	I0906 12:39:28.082553   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:39:28 WARN : hyperkit: failed to read stdout: EOF
	I0906 12:39:33.071645   14172 start.go:360] acquireMachinesLock for force-systemd-env-823000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:40:25.821746   14172 start.go:364] duration metric: took 52.750479143s to acquireMachinesLock for "force-systemd-env-823000"
	I0906 12:40:25.821794   14172 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-823000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-823000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:40:25.821852   14172 start.go:125] createHost starting for "" (driver="hyperkit")
	I0906 12:40:25.864038   14172 out.go:235] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0906 12:40:25.864115   14172 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:40:25.864138   14172 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:40:25.872883   14172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58121
	I0906 12:40:25.873243   14172 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:40:25.873636   14172 main.go:141] libmachine: Using API Version  1
	I0906 12:40:25.873658   14172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:40:25.873874   14172 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:40:25.874007   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .GetMachineName
	I0906 12:40:25.874121   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .DriverName
	I0906 12:40:25.874253   14172 start.go:159] libmachine.API.Create for "force-systemd-env-823000" (driver="hyperkit")
	I0906 12:40:25.874273   14172 client.go:168] LocalClient.Create starting
	I0906 12:40:25.874304   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem
	I0906 12:40:25.874358   14172 main.go:141] libmachine: Decoding PEM data...
	I0906 12:40:25.874373   14172 main.go:141] libmachine: Parsing certificate...
	I0906 12:40:25.874415   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem
	I0906 12:40:25.874454   14172 main.go:141] libmachine: Decoding PEM data...
	I0906 12:40:25.874466   14172 main.go:141] libmachine: Parsing certificate...
	I0906 12:40:25.874478   14172 main.go:141] libmachine: Running pre-create checks...
	I0906 12:40:25.874483   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .PreCreateCheck
	I0906 12:40:25.874567   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:25.874605   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .GetConfigRaw
	I0906 12:40:25.884956   14172 main.go:141] libmachine: Creating machine...
	I0906 12:40:25.884966   14172 main.go:141] libmachine: (force-systemd-env-823000) Calling .Create
	I0906 12:40:25.885084   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:25.885215   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:40:25.885066   14211 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:40:25.885259   14172 main.go:141] libmachine: (force-systemd-env-823000) Downloading /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 12:40:26.238495   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:40:26.238434   14211 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/id_rsa...
	I0906 12:40:26.322339   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:40:26.322252   14211 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/force-systemd-env-823000.rawdisk...
	I0906 12:40:26.322352   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Writing magic tar header
	I0906 12:40:26.322378   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Writing SSH key tar header
	I0906 12:40:26.322732   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | I0906 12:40:26.322694   14211 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000 ...
	I0906 12:40:26.709699   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:26.709726   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/hyperkit.pid
	I0906 12:40:26.709784   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Using UUID 2c5785e4-f6fc-4fc6-8d3d-8755f8e4cdc8
	I0906 12:40:26.736344   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Generated MAC 52:d:e8:23:17:3
	I0906 12:40:26.736363   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-823000
	I0906 12:40:26.736399   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c5785e4-f6fc-4fc6-8d3d-8755f8e4cdc8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:40:26.736430   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c5785e4-f6fc-4fc6-8d3d-8755f8e4cdc8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:40:26.736491   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c5785e4-f6fc-4fc6-8d3d-8755f8e4cdc8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/force-systemd-env-823000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-823000"}
	I0906 12:40:26.736536   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c5785e4-f6fc-4fc6-8d3d-8755f8e4cdc8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/force-systemd-env-823000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-823000"
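	[Editor's note: The log above shows the pattern the driver follows after launching hyperkit: it generated a MAC (`52:d:e8:23:17:3`, printed with leading zeros stripped) and then repeatedly scans `/var/db/dhcpd_leases` for a matching `hw_address` entry to discover the VM's IP. As a minimal sketch of that lookup — not minikube's actual implementation, with hypothetical names `leaseRe` and `findIPByMAC`, assuming the standard macOS bootpd lease format where `ip_address` is immediately followed by `hw_address` — it might look like:]

```go
package main

import (
	"fmt"
	"regexp"
)

// leaseRe matches an ip_address line followed by its hw_address line in the
// macOS /var/db/dhcpd_leases format. hw_address values have leading zeros
// stripped per octet (e.g. "52:d:e8:23:17:3"), matching how hyperkit prints
// the generated MAC, so a plain string comparison works.
var leaseRe = regexp.MustCompile(`ip_address=([\d.]+)\s+hw_address=1,([0-9a-f:]+)`)

// findIPByMAC scans the lease file contents for the given MAC and returns the
// associated IP address, if any.
func findIPByMAC(leases, mac string) (string, bool) {
	for _, m := range leaseRe.FindAllStringSubmatch(leases, -1) {
		if m[2] == mac {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	// Sample entry mirroring one of the dhcp entries logged above.
	sample := `{
	name=minikube
	ip_address=192.169.0.38
	hw_address=1,f6:98:f2:21:d2:4c
	lease=0x66dcab05
}`
	ip, ok := findIPByMAC(sample, "f6:98:f2:21:d2:4c")
	fmt.Println(ip, ok) // 192.169.0.38 true
}
```

	[Until the new MAC appears in the lease file, each scan (Attempt 0, 1, 2, ... below) finds only the 37 existing entries, which is why the same list repeats.]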
	I0906 12:40:26.736550   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:40:26.739453   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 DEBUG: hyperkit: Pid is 14222
	I0906 12:40:26.739932   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 0
	I0906 12:40:26.739950   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:26.740021   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:26.740939   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:26.741027   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:26.741050   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:26.741078   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:26.741094   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:26.741174   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:26.741208   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:26.741221   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:26.741231   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:26.741241   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:26.741252   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:26.741262   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:26.741272   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:26.741283   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:26.741300   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:26.741310   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:26.741321   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:26.741332   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:26.741342   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:26.741356   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:26.741371   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:26.741383   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:26.741395   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:26.741403   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:26.741409   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:26.741415   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:26.741420   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:26.741426   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:26.741436   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:26.741446   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:26.741474   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:26.741486   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:26.741499   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:26.741509   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:26.741523   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:26.741551   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:26.741562   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:26.741577   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:26.741592   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:26.747582   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:40:26.755609   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/force-systemd-env-823000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:40:26.756502   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:40:26.756524   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:40:26.756533   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:40:26.756544   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:40:27.139116   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:40:27.139133   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:40:27.253759   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:40:27.253773   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:40:27.253784   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:40:27.253809   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:40:27.254700   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:40:27.254712   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:27 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:40:28.742488   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 1
	I0906 12:40:28.742503   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:28.742613   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:28.743416   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:28.743490   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:28.743507   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:28.743528   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:28.743543   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:28.743552   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:28.743559   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:28.743570   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:28.743579   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:28.743586   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:28.743592   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:28.743605   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:28.743616   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:28.743633   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:28.743655   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:28.743666   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:28.743675   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:28.743683   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:28.743699   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:28.743709   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:28.743717   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:28.743730   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:28.743737   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:28.743744   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:28.743753   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:28.743760   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:28.743766   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:28.743773   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:28.743781   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:28.743788   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:28.743796   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:28.743802   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:28.743808   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:28.743822   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:28.743835   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:28.743848   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:28.743859   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:28.743868   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:28.743878   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:30.745189   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 2
	I0906 12:40:30.745202   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:30.745266   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:30.746092   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:30.746157   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:30.746165   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:30.746178   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:30.746189   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:30.746200   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:30.746209   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:30.746227   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:30.746248   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:30.746259   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:30.746269   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:30.746288   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:30.746304   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:30.746311   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:30.746322   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:30.746335   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:30.746342   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:30.746349   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:30.746355   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:30.746362   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:30.746368   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:30.746374   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:30.746382   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:30.746401   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:30.746412   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:30.746428   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:30.746440   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:30.746449   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:30.746455   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:30.746470   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:30.746504   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:30.746515   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:30.746526   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:30.746542   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:30.746556   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:30.746569   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:30.746584   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:30.746594   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:30.746603   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:32.654928   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:40:32.655095   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:40:32.655106   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:40:32.674946   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | 2024/09/06 12:40:32 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:40:32.747524   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 3
	I0906 12:40:32.747563   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:32.747777   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:32.749212   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:32.749394   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:32.749414   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:32.749436   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:32.749466   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:32.749478   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:32.749487   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:32.749498   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:32.749508   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:32.749528   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:32.749537   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:32.749548   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:32.749562   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:32.749572   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:32.749583   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:32.749597   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:32.749608   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:32.749617   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:32.749628   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:32.749638   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:32.749646   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:32.749663   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:32.749683   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:32.749694   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:32.749702   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:32.749718   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:32.749727   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:32.749736   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:32.749749   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:32.749760   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:32.749767   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:32.749793   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:32.749810   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:32.749821   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:32.749834   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:32.749847   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:32.749858   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:32.749879   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:32.749898   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:34.750145   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 4
	I0906 12:40:34.750160   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:34.750240   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:34.751035   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:34.751102   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:34.751118   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:34.751140   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:34.751163   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:34.751171   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:34.751186   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:34.751197   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:34.751206   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:34.751213   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:34.751241   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:34.751255   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:34.751264   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:34.751272   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:34.751285   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:34.751295   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:34.751303   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:34.751311   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:34.751317   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:34.751324   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:34.751334   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:34.751341   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:34.751349   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:34.751356   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:34.751363   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:34.751371   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:34.751378   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:34.751384   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:34.751391   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:34.751399   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:34.751407   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:34.751424   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:34.751433   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:34.751440   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:34.751449   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:34.751466   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:34.751480   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:34.751489   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:34.751497   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:36.753310   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 5
	I0906 12:40:36.753322   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:36.753362   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:36.754162   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:36.754220   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:36.754231   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:36.754240   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:36.754247   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:36.754257   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:36.754267   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:36.754278   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:36.754289   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:36.754298   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:36.754317   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:36.754341   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:36.754355   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:36.754364   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:36.754373   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:36.754382   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:36.754388   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:36.754395   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:36.754410   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:36.754424   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:36.754436   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:36.754445   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:36.754457   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:36.754470   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:36.754479   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:36.754485   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:36.754501   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:36.754516   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:36.754524   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:36.754532   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:36.754540   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:36.754547   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:36.754554   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:36.754561   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:36.754570   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:36.754577   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:36.754585   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:36.754592   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:36.754599   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:38.754836   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 6
	I0906 12:40:38.754852   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:38.754905   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:38.755676   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:38.755743   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:38.755754   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:38.755762   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:38.755769   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:38.755775   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:38.755784   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:38.755791   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:38.755799   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:38.755808   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:38.755815   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:38.755830   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:38.755842   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:38.755852   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:38.755861   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:38.755874   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:38.755883   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:38.755890   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:38.755901   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:38.755920   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:38.755932   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:38.755942   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:38.755950   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:38.755958   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:38.755965   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:38.755971   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:38.755979   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:38.755985   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:38.755991   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:38.756002   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:38.756016   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:38.756025   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:38.756035   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:38.756047   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:38.756056   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:38.756066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:38.756075   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:38.756084   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:38.756092   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:40.757961   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 7
	I0906 12:40:40.757987   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:40.758057   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:40.758794   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:40.758882   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:40.758893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:40.758905   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:40.758911   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:40.758936   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:40.758951   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:40.758962   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:40.758973   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:40.758988   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:40.758999   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:40.759007   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:40.759017   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:40.759025   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:40.759032   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:40.759055   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:40.759067   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:40.759074   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:40.759080   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:40.759087   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:40.759094   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:40.759102   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:40.759113   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:40.759120   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:40.759126   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:40.759134   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:40.759148   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:40.759161   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:40.759177   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:40.759188   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:40.759196   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:40.759205   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:40.759225   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:40.759233   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:40.759242   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:40.759249   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:40.759255   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:40.759265   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:40.759296   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:42.761148   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 8
	I0906 12:40:42.761161   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:42.761197   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:42.761976   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:42.762030   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:42.762043   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:42.762056   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:42.762068   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:42.762075   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:42.762082   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:42.762101   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:42.762110   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:42.762117   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:42.762123   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:42.762131   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:42.762137   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:42.762146   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:42.762158   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:42.762166   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:42.762173   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:42.762181   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:42.762193   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:42.762201   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:42.762209   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:42.762217   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:42.762231   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:42.762243   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:42.762251   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:42.762259   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:42.762266   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:42.762273   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:42.762281   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:42.762287   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:42.762294   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:42.762317   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:42.762330   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:42.762338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:42.762346   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:42.762352   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:42.762360   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:42.762368   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:42.762384   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:44.763129   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 9
	I0906 12:40:44.763144   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:44.763177   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:44.763969   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:44.764028   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:44.764040   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:44.764054   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:44.764063   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:44.764070   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:44.764078   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:44.764086   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:44.764092   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:44.764098   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:44.764105   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:44.764116   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:44.764124   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:44.764131   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:44.764137   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:44.764149   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:44.764167   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:44.764175   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:44.764200   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:44.764211   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:44.764221   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:44.764229   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:44.764236   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:44.764244   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:44.764252   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:44.764269   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:44.764277   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:44.764283   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:44.764292   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:44.764299   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:44.764305   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:44.764311   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:44.764317   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:44.764329   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:44.764342   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:44.764350   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:44.764356   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:44.764363   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:44.764382   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:46.765551   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 10
	I0906 12:40:46.765564   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:46.765594   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:46.766383   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:46.766446   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:46.766458   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:46.766467   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:46.766473   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:46.766481   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:46.766500   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:46.766509   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:46.766516   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:46.766527   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:46.766541   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:46.766549   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:46.766557   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:46.766569   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:46.766577   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:46.766584   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:46.766600   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:46.766617   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:46.766628   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:46.766640   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:46.766650   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:46.766660   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:46.766670   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:46.766677   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:46.766682   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:46.766689   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:46.766699   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:46.766706   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:46.766714   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:46.766726   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:46.766735   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:46.766753   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:46.766766   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:46.766774   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:46.766782   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:46.766792   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:46.766800   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:46.766807   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:46.766824   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:48.768663   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 11
	I0906 12:40:48.768675   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:48.768719   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:48.769492   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:48.769547   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:48.769561   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:48.769576   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:48.769582   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:48.769593   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:48.769605   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:48.769628   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:48.769641   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:48.769649   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:48.769676   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:48.769696   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:48.769706   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:48.769713   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:48.769719   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:48.769728   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:48.769737   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:48.769745   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:48.769752   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:48.769762   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:48.769770   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:48.769778   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:48.769785   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:48.769793   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:48.769800   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:48.769806   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:48.769814   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:48.769820   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:48.769828   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:48.769837   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:48.769845   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:48.769852   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:48.769859   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:48.769866   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:48.769874   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:48.769881   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:48.769894   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:48.769902   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:48.769910   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:50.771647   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 12
	I0906 12:40:50.771662   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:50.771707   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:50.772473   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:50.772546   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:50.772559   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:50.772572   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:50.772579   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:50.772587   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:50.772594   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:50.772608   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:50.772618   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:50.772631   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:50.772643   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:50.772652   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:50.772659   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:50.772672   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:50.772680   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:50.772688   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:50.772697   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:50.772711   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:50.772726   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:50.772735   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:50.772746   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:50.772754   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:50.772771   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:50.772781   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:50.772794   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:50.772803   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:50.772810   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:50.772817   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:50.772823   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:50.772830   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:50.772837   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:50.772844   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:50.772857   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:50.772869   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:50.772877   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:50.772885   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:50.772893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:50.772902   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:50.772912   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:52.774087   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 13
	I0906 12:40:52.774101   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:52.774164   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:52.774929   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:52.775009   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:52.775020   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:52.775029   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:52.775041   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:52.775049   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:52.775063   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:52.775078   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:52.775086   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:52.775096   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:52.775104   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:52.775115   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:52.775122   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:52.775133   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:52.775141   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:52.775148   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:52.775157   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:52.775164   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:52.775173   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:52.775182   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:52.775190   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:52.775201   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:52.775211   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:52.775218   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:52.775227   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:52.775240   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:52.775251   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:52.775260   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:52.775280   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:52.775290   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:52.775299   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:52.775306   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:52.775314   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:52.775330   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:52.775342   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:52.775357   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:52.775370   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:52.775383   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:52.775395   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:54.777239   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 14
	I0906 12:40:54.777255   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:54.777298   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:54.778053   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:54.778127   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:54.778138   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:54.778148   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:54.778156   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:54.778165   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:54.778173   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:54.778183   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:54.778203   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:54.778210   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:54.778216   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:54.778222   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:54.778229   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:54.778235   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:54.778248   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:54.778262   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:54.778271   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:54.778279   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:54.778286   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:54.778305   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:54.778313   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:54.778319   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:54.778338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:54.778351   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:54.778358   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:54.778366   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:54.778377   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:54.778383   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:54.778393   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:54.778402   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:54.778409   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:54.778417   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:54.778425   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:54.778433   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:54.778440   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:54.778450   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:54.778457   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:54.778471   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:54.778487   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:56.778807   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 15
	I0906 12:40:56.778823   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:56.778892   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:56.779696   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:56.779729   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:56.779739   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:56.779771   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:56.779793   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:56.779805   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:56.779816   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:56.779827   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:56.779837   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:56.779847   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:56.779855   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:56.779862   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:56.779871   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:56.779882   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:56.779890   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:56.779897   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:56.779915   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:56.779924   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:56.779932   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:56.779940   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:56.779955   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:56.779968   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:56.779980   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:56.779986   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:56.779993   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:56.780003   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:56.780011   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:56.780019   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:56.780027   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:56.780035   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:56.780042   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:56.780050   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:56.780066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:56.780079   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:56.780096   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:56.780103   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:56.780111   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:56.780122   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:56.780138   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:40:58.781313   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 16
	I0906 12:40:58.781327   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:40:58.781391   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:40:58.782176   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:40:58.782227   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:40:58.782246   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:40:58.782261   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:40:58.782274   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:40:58.782285   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:40:58.782291   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:40:58.782298   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:40:58.782303   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:40:58.782310   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:40:58.782327   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:40:58.782338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:40:58.782345   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:40:58.782362   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:40:58.782369   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:40:58.782383   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:40:58.782391   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:40:58.782399   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:40:58.782407   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:40:58.782414   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:40:58.782422   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:40:58.782429   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:40:58.782439   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:40:58.782446   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:40:58.782454   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:40:58.782464   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:40:58.782472   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:40:58.782478   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:40:58.782486   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:40:58.782501   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:40:58.782510   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:40:58.782517   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:40:58.782530   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:40:58.782547   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:40:58.782558   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:40:58.782567   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:40:58.782575   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:40:58.782582   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:40:58.782588   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:00.783918   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 17
	I0906 12:41:00.783934   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:00.784003   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:00.784772   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:00.784826   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:00.784841   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:00.784852   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:00.784870   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:00.784883   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:00.784893   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:00.784900   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:00.784906   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:00.784913   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:00.784920   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:00.784927   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:00.784935   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:00.784945   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:00.784952   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:00.784975   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:00.784990   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:00.785001   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:00.785020   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:00.785034   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:00.785044   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:00.785059   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:00.785068   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:00.785078   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:00.785086   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:00.785093   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:00.785103   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:00.785114   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:00.785126   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:00.785138   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:00.785145   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:00.785153   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:00.785169   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:00.785180   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:00.785189   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:00.785197   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:00.785203   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:00.785210   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:00.785217   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:02.787067   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 18
	I0906 12:41:02.787083   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:02.787143   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:02.787915   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:02.787979   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:02.787996   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:02.788008   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:02.788023   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:02.788031   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:02.788037   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:02.788057   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:02.788068   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:02.788076   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:02.788082   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:02.788088   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:02.788096   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:02.788104   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:02.788126   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:02.788141   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:02.788150   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:02.788159   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:02.788168   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:02.788174   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:02.788181   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:02.788187   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:02.788200   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:02.788212   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:02.788221   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:02.788227   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:02.788247   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:02.788255   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:02.788270   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:02.788283   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:02.788298   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:02.788306   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:02.788314   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:02.788322   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:02.788329   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:02.788336   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:02.788348   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:02.788360   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:02.788370   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:04.790197   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 19
	I0906 12:41:04.790209   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:04.790265   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:04.791080   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:04.791106   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:04.791126   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:04.791133   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:04.791144   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:04.791159   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:04.791172   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:04.791180   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:04.791187   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:04.791200   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:04.791209   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:04.791217   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:04.791225   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:04.791235   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:04.791245   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:04.791254   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:04.791263   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:04.791270   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:04.791277   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:04.791286   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:04.791294   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:04.791308   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:04.791320   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:04.791328   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:04.791338   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:04.791351   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:04.791359   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:04.791365   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:04.791373   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:04.791379   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:04.791387   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:04.791394   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:04.791402   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:04.791417   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:04.791432   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:04.791441   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:04.791448   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:04.791456   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:04.791464   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:06.792482   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 20
	I0906 12:41:06.792505   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:06.792516   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:06.793289   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:06.793359   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:06.793371   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:06.793382   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:06.793388   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:06.793397   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:06.793410   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:06.793419   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:06.793426   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:06.793433   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:06.793446   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:06.793459   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:06.793474   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:06.793484   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:06.793499   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:06.793508   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:06.793517   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:06.793525   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:06.793533   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:06.793541   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:06.793548   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:06.793554   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:06.793561   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:06.793570   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:06.793583   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:06.793594   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:06.793601   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:06.793609   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:06.793628   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:06.793645   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:06.793660   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:06.793674   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:06.793681   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:06.793687   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:06.793694   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:06.793702   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:06.793709   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:06.793717   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:06.793725   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:08.795560   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 21
	I0906 12:41:08.795575   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:08.795644   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:08.796410   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:08.796482   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:08.796495   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:08.796504   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:08.796510   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:08.796519   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:08.796529   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:08.796545   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:08.796554   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:08.796566   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:08.796589   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:08.796604   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:08.796614   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:08.796627   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:08.796635   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:08.796641   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:08.796646   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:08.796654   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:08.796662   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:08.796669   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:08.796678   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:08.796685   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:08.796691   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:08.796697   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:08.796704   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:08.796713   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:08.796719   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:08.796727   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:08.796742   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:08.796754   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:08.796762   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:08.796768   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:08.796783   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:08.796792   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:08.796799   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:08.796806   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:08.796813   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:08.796821   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:08.796829   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:10.798687   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 22
	I0906 12:41:10.798699   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:10.798767   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:10.799717   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:10.799795   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:10.799804   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:10.799812   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:10.799818   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:10.799826   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:10.799832   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:10.799842   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:10.799848   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:10.799864   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:10.799874   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:10.799882   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:10.799889   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:10.799916   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:10.799927   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:10.799938   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:10.799946   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:10.799959   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:10.799967   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:10.799979   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:10.799988   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:10.799996   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:10.800004   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:10.800012   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:10.800018   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:10.800027   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:10.800035   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:10.800042   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:10.800050   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:10.800057   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:10.800064   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:10.800077   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:10.800084   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:10.800091   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:10.800098   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:10.800104   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:10.800111   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:10.800119   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:10.800128   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:12.800095   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 23
	I0906 12:41:12.800110   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:12.800204   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:12.801011   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:12.801082   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:12.801093   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:12.801108   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:12.801116   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:12.801123   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:12.801129   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:12.801141   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:12.801151   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:12.801164   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:12.801175   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:12.801182   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:12.801188   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:12.801195   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:12.801204   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:12.801215   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:12.801223   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:12.801231   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:12.801237   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:12.801244   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:12.801252   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:12.801258   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:12.801265   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:12.801271   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:12.801287   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:12.801300   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:12.801315   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:12.801328   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:12.801339   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:12.801348   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:12.801355   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:12.801362   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:12.801386   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:12.801398   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:12.801406   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:12.801414   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:12.801419   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:12.801427   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:12.801433   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:14.803090   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 24
	I0906 12:41:14.803106   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:14.803166   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:14.803968   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:14.804023   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:14.804037   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:14.804053   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:14.804066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:14.804074   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:14.804087   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:14.804095   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:14.804102   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:14.804109   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:14.804116   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:14.804122   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:14.804142   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:14.804159   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:14.804168   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:14.804177   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:14.804186   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:14.804193   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:14.804199   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:14.804205   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:14.804211   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:14.804224   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:14.804251   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:14.804262   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:14.804270   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:14.804278   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:14.804285   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:14.804291   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:14.804304   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:14.804316   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:14.804324   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:14.804331   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:14.804345   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:14.804364   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:14.804372   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:14.804381   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:14.804387   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:14.804400   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:14.804409   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:16.806249   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 25
	I0906 12:41:16.806262   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:16.806325   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:16.807118   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:16.807163   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:16.807180   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:16.807190   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:16.807199   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:16.807213   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:16.807224   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:16.807238   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:16.807246   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:16.807258   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:16.807271   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:16.807284   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:16.807294   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:16.807302   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:16.807316   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:16.807324   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:16.807333   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:16.807341   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:16.807349   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:16.807362   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:16.807373   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:16.807388   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:16.807401   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:16.807411   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:16.807420   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:16.807427   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:16.807435   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:16.807443   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:16.807451   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:16.807457   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:16.807465   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:16.807471   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:16.807479   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:16.807489   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:16.807497   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:16.807504   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:16.807511   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:16.807526   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:16.807543   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:18.808065   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 26
	I0906 12:41:18.808081   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:18.808159   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:18.808961   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:18.808992   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:18.809000   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:18.809018   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:18.809028   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:18.809036   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:18.809043   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:18.809051   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:18.809057   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:18.809064   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:18.809075   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:18.809087   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:18.809095   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:18.809103   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:18.809110   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:18.809128   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:18.809139   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:18.809149   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:18.809157   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:18.809169   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:18.809178   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:18.809191   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:18.809204   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:18.809219   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:18.809231   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:18.809240   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:18.809248   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:18.809256   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:18.809262   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:18.809269   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:18.809277   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:18.809300   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:18.809316   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:18.809323   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:18.809333   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:18.809342   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:18.809351   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:18.809359   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:18.809367   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:20.811209   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 27
	I0906 12:41:20.811222   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:20.811299   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:20.812051   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:20.812164   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:20.812174   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:20.812185   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:20.812193   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:20.812205   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:20.812212   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:20.812219   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:20.812227   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:20.812235   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:20.812251   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:20.812266   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:20.812280   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:20.812294   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:20.812302   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:20.812310   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:20.812318   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:20.812325   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:20.812333   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:20.812342   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:20.812350   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:20.812360   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:20.812368   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:20.812375   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:20.812381   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:20.812387   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:20.812395   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:20.812403   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:20.812410   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:20.812417   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:20.812423   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:20.812428   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:20.812436   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:20.812444   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:20.812450   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:20.812457   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:20.812468   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:20.812476   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:20.812483   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:22.813781   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 28
	I0906 12:41:22.813795   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:22.813855   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:22.814634   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:22.814703   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:22.814717   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:22.814725   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:22.814733   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:22.814744   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:22.814755   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:22.814764   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:22.814780   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:22.814791   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:22.814801   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:22.814809   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:22.814823   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:22.814835   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:22.814853   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:22.814866   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:22.814874   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:22.814882   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:22.814889   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:22.814897   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:22.814909   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:22.814919   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:22.814926   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:22.814934   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:22.814941   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:22.814954   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:22.814964   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:22.814972   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:22.814981   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:22.814987   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:22.815003   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:22.815017   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:22.815025   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:22.815031   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:22.815039   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:22.815046   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:22.815054   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:22.815069   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:22.815083   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:24.814877   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Attempt 29
	I0906 12:41:24.814889   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:41:24.814942   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | hyperkit pid from json: 14222
	I0906 12:41:24.815701   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Searching for 52:d:e8:23:17:3 in /var/db/dhcpd_leases ...
	I0906 12:41:24.815760   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0906 12:41:24.815769   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:f6:98:f2:21:d2:4c ID:1,f6:98:f2:21:d2:4c Lease:0x66dcab05}
	I0906 12:41:24.815779   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:26:b1:20:d4:e0:0 ID:1,26:b1:20:d4:e0:0 Lease:0x66dcaa43}
	I0906 12:41:24.815802   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:32:29:ec:91:d1:ae ID:1,32:29:ec:91:d1:ae Lease:0x66db584e}
	I0906 12:41:24.815810   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:41:24.815817   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca90f}
	I0906 12:41:24.815825   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca8d3}
	I0906 12:41:24.815835   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:4f:ea:ce:d2:49 ID:1,56:4f:ea:ce:d2:49 Lease:0x66db546c}
	I0906 12:41:24.815842   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:42:20:8c:12:48:20 ID:1,42:20:8c:12:48:20 Lease:0x66dca5a8}
	I0906 12:41:24.815847   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:42:ac:8d:b2:50:a8 ID:1,42:ac:8d:b2:50:a8 Lease:0x66dca56c}
	I0906 12:41:24.815862   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:fe:5e:4f:9d:4:ce ID:1,fe:5e:4f:9d:4:ce Lease:0x66dca53f}
	I0906 12:41:24.815884   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ba:e6:61:25:70:3a ID:1,ba:e6:61:25:70:3a Lease:0x66dca4c8}
	I0906 12:41:24.815902   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db53ad}
	I0906 12:41:24.815915   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:41:24.815924   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:41:24.815932   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:41:24.815939   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:f6:28:37:46:d:43 ID:1,f6:28:37:46:d:43 Lease:0x66dca06e}
	I0906 12:41:24.815948   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:7e:35:14:3d:93:fc ID:1,7e:35:14:3d:93:fc Lease:0x66dc9fa8}
	I0906 12:41:24.815955   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:ae:ba:57:3d:b2:90 ID:1,ae:ba:57:3d:b2:90 Lease:0x66dc9b86}
	I0906 12:41:24.815963   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:b4:2c:88:50:23 ID:1,ce:b4:2c:88:50:23 Lease:0x66dc935c}
	I0906 12:41:24.815977   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:be:b4:d4:c7:dc:d ID:1,be:b4:d4:c7:dc:d Lease:0x66dc9279}
	I0906 12:41:24.815986   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:7a:a1:77:4a:a5:91 ID:1,7a:a1:77:4a:a5:91 Lease:0x66dc91c0}
	I0906 12:41:24.815993   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:de:5b:e5:58:6e:37 ID:1,de:5b:e5:58:6e:37 Lease:0x66db3fb5}
	I0906 12:41:24.816001   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:22:30:c:21:5:7e ID:1,22:30:c:21:5:7e Lease:0x66dc9193}
	I0906 12:41:24.816013   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:22:d5:86:cd:9b:f1 ID:1,22:d5:86:cd:9b:f1 Lease:0x66dc9151}
	I0906 12:41:24.816022   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:8e:5a:66:dd:18:f7 ID:1,8e:5a:66:dd:18:f7 Lease:0x66dc8e98}
	I0906 12:41:24.816029   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:a6:6b:40:a8:1c:f0 ID:1,a6:6b:40:a8:1c:f0 Lease:0x66dc8e72}
	I0906 12:41:24.816036   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:1e:5:92:bc:cb:c4 ID:1,1e:5:92:bc:cb:c4 Lease:0x66dc8e61}
	I0906 12:41:24.816044   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:4a:b7:b9:e7:93:ce ID:1,4a:b7:b9:e7:93:ce Lease:0x66dc8e06}
	I0906 12:41:24.816052   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:be:37:70:a5:5c:9e ID:1,be:37:70:a5:5c:9e Lease:0x66dc8dd3}
	I0906 12:41:24.816059   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:4:1e:42:6:c6 ID:1,92:4:1e:42:6:c6 Lease:0x66dc8d72}
	I0906 12:41:24.816066   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:7a:b0:c7:7c:c4:c9 ID:1,7a:b0:c7:7c:c4:c9 Lease:0x66dc8d32}
	I0906 12:41:24.816076   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:f2:90:76:52:e7:ec ID:1,f2:90:76:52:e7:ec Lease:0x66db3b1d}
	I0906 12:41:24.816083   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:c5:2:c1:f8:c6 ID:1,b2:c5:2:c1:f8:c6 Lease:0x66dc8cec}
	I0906 12:41:24.816090   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:6e:d5:13:98:90:6d ID:1,6e:d5:13:98:90:6d Lease:0x66dc8cc2}
	I0906 12:41:24.816097   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:3e:ee:75:ec:70:97 ID:1,3e:ee:75:ec:70:97 Lease:0x66dc8310}
	I0906 12:41:24.816124   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:32:c7:d0:78:28:2b ID:1,32:c7:d0:78:28:2b Lease:0x66db30ee}
	I0906 12:41:24.816139   14172 main.go:141] libmachine: (force-systemd-env-823000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a2:1f:fe:97:10:b5 ID:1,a2:1f:fe:97:10:b5 Lease:0x66dc7efb}
	I0906 12:41:26.816551   14172 client.go:171] duration metric: took 1m0.942747484s to LocalClient.Create
	I0906 12:41:28.818372   14172 start.go:128] duration metric: took 1m2.997005278s to createHost
	I0906 12:41:28.818387   14172 start.go:83] releasing machines lock for "force-systemd-env-823000", held for 1m2.997109317s
	W0906 12:41:28.818478   14172 out.go:270] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-823000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:d:e8:23:17:3
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-823000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:d:e8:23:17:3
	I0906 12:41:28.881809   14172 out.go:201] 
	W0906 12:41:28.903050   14172 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:d:e8:23:17:3
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 52:d:e8:23:17:3
	W0906 12:41:28.903067   14172 out.go:270] * 
	* 
	W0906 12:41:28.903720   14172 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:41:28.965922   14172 out.go:201] 
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-823000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-823000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-823000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (195.154391ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-823000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-823000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-06 12:41:29.372696 -0700 PDT m=+4366.534954307
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-823000 -n force-systemd-env-823000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-823000 -n force-systemd-env-823000: exit status 7 (108.743343ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0906 12:41:29.454848   14246 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 12:41:29.454870   14246 status.go:249] status error: getting IP: IP address is not set
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-823000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-823000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-823000: (5.240024255s)
--- FAIL: TestForceSystemdEnv (236.99s)
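The failure above ("IP address never found in dhcp leases file ... could not find an IP address for 52:d:e8:23:17:3") comes from the driver polling macOS's dhcpd leases for the VM's MAC, as seen in the `dhcp entry:` DBG lines. A minimal, hypothetical sketch of that lookup (illustrative names only, not minikube's actual API; note that macOS writes MAC octets without zero padding, so octets must be compared numerically):

```python
from typing import Optional


def find_ip_for_mac(leases_text: str, mac: str) -> Optional[str]:
    """Scan /var/db/dhcpd_leases-style text for an entry whose
    hw_address matches `mac`, returning its ip_address if found.

    Octets are compared as integers because macOS omits leading
    zeros (e.g. "52:d:e8:23:17:3" rather than "52:0d:e8:23:17:03").
    """
    want = [int(octet, 16) for octet in mac.split(":")]
    ip = None
    for raw in leases_text.splitlines():
        line = raw.strip()
        if line.startswith("ip_address="):
            # ip_address precedes hw_address within each { ... } block
            ip = line.split("=", 1)[1]
        elif line.startswith("hw_address="):
            # format: hw_address=1,e:ef:97:91:be:81  (leading "1," is the type)
            octets = line.split(",", 1)[1]
            if [int(o, 16) for o in octets.split(":")] == want:
                return ip
    return None


sample = """{
\tname=minikube
\tip_address=192.169.0.24
\thw_address=1,e:ef:97:91:be:81
\tlease=0x66dca3e7
}"""

print(find_ip_for_mac(sample, "0e:ef:97:91:be:81"))  # 192.169.0.24
print(find_ip_for_mac(sample, "52:d:e8:23:17:3"))    # None: no lease ever appeared
```

The `None` case mirrors this test's failure mode: the VM booted but its MAC never showed up in the leases file, so the driver timed out waiting for an IP.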
TestFunctional/parallel/License (0.25s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-darwin-amd64 license: exit status 40 (246.343507ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_cp_4cc22f0c26ba081940c948971cd4fd47556791cb_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.25s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (148.73s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-343000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-343000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-343000 -v=7 --alsologtostderr: (27.13830157s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-343000 --wait=true -v=7 --alsologtostderr
E0906 12:00:57.315218    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-343000 --wait=true -v=7 --alsologtostderr: exit status 90 (1m58.93129542s)
-- stdout --
	* [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	* Restarting existing hyperkit VM for "ha-343000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	* Enabled addons: 
	
	* Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	* Restarting existing hyperkit VM for "ha-343000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.24
	
	
-- /stdout --
** stderr ** 
	I0906 12:00:13.694390   12094 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:00:13.694568   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694575   12094 out.go:358] Setting ErrFile to fd 2...
	I0906 12:00:13.694584   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694756   12094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:00:13.696524   12094 out.go:352] Setting JSON to false
	I0906 12:00:13.721080   12094 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10784,"bootTime":1725638429,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:00:13.721173   12094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:00:13.742655   12094 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:00:13.784492   12094 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:00:13.784545   12094 notify.go:220] Checking for updates...
	I0906 12:00:13.827582   12094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:13.848323   12094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:00:13.869497   12094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:00:13.890655   12094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:00:13.911464   12094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:00:13.933299   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:13.933473   12094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:00:13.934147   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:13.934226   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:13.943846   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56123
	I0906 12:00:13.944225   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:13.944638   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:13.944649   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:13.944842   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:13.944971   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:13.973620   12094 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:00:13.994428   12094 start.go:297] selected driver: hyperkit
	I0906 12:00:13.994464   12094 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:d
efault APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:13.994699   12094 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:00:13.994893   12094 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:13.995108   12094 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:00:14.004848   12094 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:00:14.008700   12094 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.008720   12094 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:00:14.011904   12094 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:00:14.011944   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:14.011950   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:14.012025   12094 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:14.012136   12094 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:14.054473   12094 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:00:14.075405   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:14.075507   12094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:00:14.075533   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:14.075741   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:14.075759   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:14.075970   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.076999   12094 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:14.077104   12094 start.go:364] duration metric: took 81.424µs to acquireMachinesLock for "ha-343000"
	I0906 12:00:14.077136   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:14.077155   12094 fix.go:54] fixHost starting: 
	I0906 12:00:14.077547   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.077578   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:14.086539   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56125
	I0906 12:00:14.086911   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:14.087275   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:14.087288   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:14.087499   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:14.087626   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.087742   12094 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:00:14.087847   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.087908   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 10421
	I0906 12:00:14.088810   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.088857   12094 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:00:14.088881   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:00:14.088974   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:14.130290   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:00:14.151187   12094 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:00:14.151341   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.151358   12094 main.go:141] libmachine: (ha-343000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:00:14.152544   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.152554   12094 main.go:141] libmachine: (ha-343000) DBG | pid 10421 is in state "Stopped"
	I0906 12:00:14.152567   12094 main.go:141] libmachine: (ha-343000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid...
	I0906 12:00:14.152736   12094 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:00:14.278050   12094 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:00:14.278072   12094 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:14.278193   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278238   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278268   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:14.278300   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:14.278328   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:14.279797   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Pid is 12107
	I0906 12:00:14.280167   12094 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:00:14.280184   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.280255   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:00:14.282230   12094 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:00:14.282307   12094 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:14.282320   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:14.282355   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:14.282372   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:00:14.282386   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca170}
	I0906 12:00:14.282393   12094 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:00:14.282401   12094 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:00:14.282427   12094 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:00:14.283073   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:14.283250   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.283690   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:14.283700   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.283812   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:14.283907   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:14.284012   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284129   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284231   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:14.284358   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:14.284630   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:14.284642   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:14.288262   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:14.344998   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:14.345710   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.345724   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.345740   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.345751   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.732607   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:14.732636   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:14.847834   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.847852   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.847864   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.847895   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.848717   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:14.848731   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:00:20.456737   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:00:20.456790   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:00:20.456799   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:00:20.482344   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:00:49.356770   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:00:49.356783   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.356936   12094 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:00:49.356945   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.357080   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.357164   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.357260   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357348   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357460   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.357608   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.357783   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.357791   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:00:49.434653   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:00:49.434670   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.434810   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.434910   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.434998   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.435076   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.435208   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.435362   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.435373   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:00:49.507101   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:00:49.507131   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:00:49.507148   12094 buildroot.go:174] setting up certificates
	I0906 12:00:49.507157   12094 provision.go:84] configureAuth start
	I0906 12:00:49.507164   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.507301   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:49.507386   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.507484   12094 provision.go:143] copyHostCerts
	I0906 12:00:49.507518   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.507591   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:00:49.507599   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.508035   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:00:49.508249   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508290   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:00:49.508295   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508374   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:00:49.508511   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508554   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:00:49.508560   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508641   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:00:49.508778   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:00:49.908537   12094 provision.go:177] copyRemoteCerts
	I0906 12:00:49.908600   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:00:49.908618   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.908766   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.908869   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.908969   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.909081   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:49.950319   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:00:49.950395   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:00:49.969178   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:00:49.969240   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0906 12:00:49.988087   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:00:49.988150   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:00:50.007042   12094 provision.go:87] duration metric: took 499.867022ms to configureAuth
	I0906 12:00:50.007055   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:00:50.007239   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:50.007254   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:50.007383   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.007480   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.007568   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007658   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007737   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.007851   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.007970   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.007977   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:00:50.074324   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:00:50.074334   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:00:50.074409   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:00:50.074422   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.074584   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.074695   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074789   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074892   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.075030   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.075178   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.075221   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:00:50.150993   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:00:50.151016   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.151152   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.151245   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151341   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151440   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.151557   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.151697   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.151709   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:00:51.817119   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:00:51.817133   12094 machine.go:96] duration metric: took 37.533362432s to provisionDockerMachine
	I0906 12:00:51.817147   12094 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:00:51.817155   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:00:51.817165   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.817341   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:00:51.817358   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.817453   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.817539   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.817633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.817710   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.857455   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:00:51.860581   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:00:51.860594   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:00:51.860691   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:00:51.860881   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:00:51.860887   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:00:51.861099   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:00:51.869229   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:51.888403   12094 start.go:296] duration metric: took 71.247262ms for postStartSetup
	I0906 12:00:51.888426   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.888596   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:00:51.888609   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.888701   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.888782   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.889409   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.889522   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.930243   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:00:51.930305   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:00:51.984449   12094 fix.go:56] duration metric: took 37.907224883s for fixHost
	I0906 12:00:51.984473   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.984633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.984732   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984820   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984909   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.985037   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:51.985190   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:51.985198   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:00:52.050855   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649252.136473627
	
	I0906 12:00:52.050870   12094 fix.go:216] guest clock: 1725649252.136473627
	I0906 12:00:52.050876   12094 fix.go:229] Guest: 2024-09-06 12:00:52.136473627 -0700 PDT Remote: 2024-09-06 12:00:51.984463 -0700 PDT m=+38.325391256 (delta=152.010627ms)
	I0906 12:00:52.050893   12094 fix.go:200] guest clock delta is within tolerance: 152.010627ms
	I0906 12:00:52.050897   12094 start.go:83] releasing machines lock for "ha-343000", held for 37.97370768s
	I0906 12:00:52.050919   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051055   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:52.051151   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051468   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051587   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051648   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:00:52.051681   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051732   12094 ssh_runner.go:195] Run: cat /version.json
	I0906 12:00:52.051743   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051763   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051867   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.051920   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051954   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052063   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.052085   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.052169   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052247   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.086438   12094 ssh_runner.go:195] Run: systemctl --version
	I0906 12:00:52.137495   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:00:52.142191   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:00:52.142231   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:00:52.154446   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:00:52.154458   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.154552   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.172091   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:00:52.181012   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:00:52.190031   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.190079   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:00:52.199064   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.207848   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:00:52.216656   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.225515   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:00:52.234566   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:00:52.243255   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:00:52.252029   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:00:52.260858   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:00:52.268821   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:00:52.276765   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.377515   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:00:52.394471   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.394552   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:00:52.407063   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.418612   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:00:52.433923   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.444946   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.455717   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:00:52.478561   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.492332   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.507486   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:00:52.510450   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:00:52.518207   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:00:52.531443   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:00:52.631849   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:00:52.738034   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.738112   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:00:52.751782   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.847435   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:00:55.174969   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327505108s)
	I0906 12:00:55.175030   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:00:55.186551   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.197381   12094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:00:55.299777   12094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:00:55.398609   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.498794   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:00:55.512395   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.523922   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.617484   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:00:55.684124   12094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:00:55.684200   12094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:00:55.688892   12094 start.go:563] Will wait 60s for crictl version
	I0906 12:00:55.688940   12094 ssh_runner.go:195] Run: which crictl
	I0906 12:00:55.692913   12094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:00:55.719238   12094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:00:55.719311   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.738356   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.778738   12094 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:00:55.778787   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:55.779172   12094 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:00:55.783863   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:00:55.794970   12094 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:00:55.795055   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:55.795104   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.809713   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.809724   12094 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:00:55.809795   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.823764   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.823788   12094 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:00:55.823798   12094 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:00:55.823893   12094 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:00:55.823968   12094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:00:55.861417   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:55.861428   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:55.861437   12094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:00:55.861452   12094 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:00:55.861532   12094 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:00:55.861545   12094 kube-vip.go:115] generating kube-vip config ...
	I0906 12:00:55.861593   12094 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:00:55.875047   12094 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:00:55.875114   12094 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:00:55.875172   12094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:00:55.890674   12094 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:00:55.890728   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:00:55.898141   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:00:55.911696   12094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:00:55.925468   12094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:00:55.940252   12094 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:00:55.953658   12094 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:00:55.956513   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:00:55.965807   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:56.068757   12094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:00:56.082925   12094 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:00:56.082937   12094 certs.go:194] generating shared ca certs ...
	I0906 12:00:56.082949   12094 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.083129   12094 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:00:56.083206   12094 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:00:56.083216   12094 certs.go:256] generating profile certs ...
	I0906 12:00:56.083325   12094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:00:56.083344   12094 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:00:56.083361   12094 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.24 192.169.0.25 192.169.0.26 192.169.0.254]
	I0906 12:00:56.334331   12094 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 ...
	I0906 12:00:56.334349   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57: {Name:mke69baf11a7ce9368028746c3ea673d595b5389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.334927   12094 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 ...
	I0906 12:00:56.334938   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57: {Name:mk818d10389922964dda91749efae3a655d8f5d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.335204   12094 certs.go:381] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt
	I0906 12:00:56.335461   12094 certs.go:385] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key
	I0906 12:00:56.335705   12094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:00:56.335715   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:00:56.335738   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:00:56.335758   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:00:56.335778   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:00:56.335796   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:00:56.335815   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:00:56.335833   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:00:56.335852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:00:56.335940   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:00:56.335991   12094 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:00:56.335999   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:00:56.336041   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:00:56.336081   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:00:56.336121   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:00:56.336206   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:56.336250   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.336272   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.336292   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.336712   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:00:56.388979   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:00:56.414966   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:00:56.439775   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:00:56.466208   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:00:56.492195   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:00:56.512216   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:00:56.532441   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:00:56.552158   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:00:56.571661   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:00:56.591148   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:00:56.610631   12094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:00:56.624148   12094 ssh_runner.go:195] Run: openssl version
	I0906 12:00:56.628419   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:00:56.636965   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640480   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640510   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.644827   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:00:56.653067   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:00:56.661485   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665034   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665069   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.669468   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:00:56.677956   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:00:56.686368   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689913   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689948   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.694107   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:00:56.702602   12094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:00:56.706177   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:00:56.711002   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:00:56.715284   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:00:56.720202   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:00:56.724667   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:00:56.728981   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:00:56.733338   12094 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:56.733444   12094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:00:56.746587   12094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:00:56.754476   12094 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:00:56.754485   12094 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:00:56.754526   12094 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:00:56.762271   12094 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:00:56.762575   12094 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.762661   12094 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:00:56.762831   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.763230   12094 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.763419   12094 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf24aae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:00:56.763713   12094 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:00:56.763884   12094 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:00:56.771199   12094 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:00:56.771211   12094 kubeadm.go:597] duration metric: took 16.721202ms to restartPrimaryControlPlane
	I0906 12:00:56.771216   12094 kubeadm.go:394] duration metric: took 37.882882ms to StartCluster
	I0906 12:00:56.771224   12094 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771295   12094 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.771611   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771827   12094 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:00:56.771840   12094 start.go:241] waiting for startup goroutines ...
	I0906 12:00:56.771853   12094 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:00:56.771974   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.815354   12094 out.go:177] * Enabled addons: 
	I0906 12:00:56.836135   12094 addons.go:510] duration metric: took 64.272275ms for enable addons: enabled=[]
	I0906 12:00:56.836233   12094 start.go:246] waiting for cluster config update ...
	I0906 12:00:56.836259   12094 start.go:255] writing updated cluster config ...
	I0906 12:00:56.858430   12094 out.go:201] 
	I0906 12:00:56.879711   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.879825   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.901995   12094 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:00:56.944141   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:56.944200   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:56.944408   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:56.944427   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:56.944549   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.945615   12094 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:56.945736   12094 start.go:364] duration metric: took 97.464µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:00:56.945762   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:56.945772   12094 fix.go:54] fixHost starting: m02
	I0906 12:00:56.946173   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:56.946201   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:56.955570   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56147
	I0906 12:00:56.955905   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:56.956247   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:56.956263   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:56.956475   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:56.956595   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:56.956699   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:00:56.956773   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:56.956871   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 10914
	I0906 12:00:56.957763   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:56.957792   12094 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:00:56.957800   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:00:56.957882   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:57.000302   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:00:57.021304   12094 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:00:57.021585   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.021622   12094 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:00:57.022935   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:57.022948   12094 main.go:141] libmachine: (ha-343000-m02) DBG | pid 10914 is in state "Stopped"
	I0906 12:00:57.023011   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:00:57.023381   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:00:57.049902   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:00:57.049929   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:57.050062   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050089   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050146   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machine
s/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:57.050177   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:57.050183   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:57.051588   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Pid is 12118
	I0906 12:00:57.051949   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:00:57.051968   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.052042   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:00:57.054138   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:00:57.054208   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:57.054228   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca2c7}
	I0906 12:00:57.054254   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:57.054281   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:57.054300   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:00:57.054322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:00:57.054328   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:00:57.054969   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:00:57.055183   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:57.055671   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:57.055682   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:57.055826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:00:57.055916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:00:57.056038   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056169   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056275   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:00:57.056401   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:57.056636   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:00:57.056647   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:57.059445   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:57.069382   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:57.070322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.070335   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.070343   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.070352   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.458835   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:57.458851   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:57.573579   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.573599   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.573609   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.573621   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.574503   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:57.574513   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:01:03.177947   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:01:03.178017   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:01:03.178029   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:01:03.201747   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:01:08.125551   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:01:08.125569   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125712   12094 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:01:08.125723   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125829   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.125916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.126006   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126090   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126176   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.126310   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.126460   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.126470   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:01:08.196553   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:01:08.196570   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.196738   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.196849   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.196938   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.197031   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.197164   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.197302   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.197315   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:01:08.265441   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:01:08.265457   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:01:08.265466   12094 buildroot.go:174] setting up certificates
	I0906 12:01:08.265473   12094 provision.go:84] configureAuth start
	I0906 12:01:08.265479   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.265616   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:08.265727   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.265818   12094 provision.go:143] copyHostCerts
	I0906 12:01:08.265852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.265899   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:01:08.265905   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.266042   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:01:08.266231   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266259   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:01:08.266263   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266340   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:01:08.266475   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266502   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:01:08.266507   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266580   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:01:08.266719   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:01:08.411000   12094 provision.go:177] copyRemoteCerts
	I0906 12:01:08.411052   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:01:08.411067   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.411204   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.411300   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.411401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.411487   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:08.448748   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:01:08.448826   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:01:08.467690   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:01:08.467754   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:01:08.486653   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:01:08.486713   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:01:08.505720   12094 provision.go:87] duration metric: took 240.238536ms to configureAuth
	I0906 12:01:08.505733   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:01:08.505898   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:01:08.505912   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:08.506045   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.506132   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.506232   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506324   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.506529   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.506694   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.506702   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:01:08.568618   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:01:08.568634   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:01:08.568774   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:01:08.568789   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.568942   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.569034   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569216   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.569394   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.569538   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.569591   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:01:08.641655   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:01:08.641670   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.641797   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.641898   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.641987   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.642088   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.642231   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.642380   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.642393   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:01:10.295573   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:01:10.295588   12094 machine.go:96] duration metric: took 13.23988234s to provisionDockerMachine
	I0906 12:01:10.295597   12094 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:01:10.295605   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:01:10.295615   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.295802   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:01:10.295816   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.295925   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.296020   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.296104   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.296195   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.338012   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:01:10.342178   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:01:10.342189   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:01:10.342305   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:01:10.342480   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:01:10.342486   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:01:10.342677   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:01:10.352005   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:01:10.386594   12094 start.go:296] duration metric: took 90.988002ms for postStartSetup
	I0906 12:01:10.386614   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.387260   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:01:10.387299   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.387908   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.388016   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.388130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.388217   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.425216   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:01:10.425274   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:01:10.478532   12094 fix.go:56] duration metric: took 13.532732174s for fixHost
	I0906 12:01:10.478558   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.478717   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.478826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.478930   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.479017   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.479147   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:10.479284   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:10.479291   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:01:10.540605   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649270.629925942
	
	I0906 12:01:10.540619   12094 fix.go:216] guest clock: 1725649270.629925942
	I0906 12:01:10.540624   12094 fix.go:229] Guest: 2024-09-06 12:01:10.629925942 -0700 PDT Remote: 2024-09-06 12:01:10.478547 -0700 PDT m=+56.819439281 (delta=151.378942ms)
	I0906 12:01:10.540635   12094 fix.go:200] guest clock delta is within tolerance: 151.378942ms
	I0906 12:01:10.540639   12094 start.go:83] releasing machines lock for "ha-343000-m02", held for 13.594865643s
	I0906 12:01:10.540654   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.540778   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:10.562345   12094 out.go:177] * Found network options:
	I0906 12:01:10.583938   12094 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:01:10.604860   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.604892   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605507   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605705   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605840   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:01:10.605876   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:01:10.605977   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.606085   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:01:10.606085   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606109   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.606320   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606351   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606520   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606538   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606733   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.606776   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606943   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:01:10.641836   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:01:10.641895   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:01:10.688301   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:01:10.688319   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.688399   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:10.704168   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:01:10.713221   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:01:10.722234   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:01:10.722279   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:01:10.731269   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.740159   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:01:10.749214   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.758175   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:01:10.767634   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:01:10.776683   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:01:10.785787   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:01:10.794766   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:01:10.803033   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:01:10.811174   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:10.907940   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:01:10.926633   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.926708   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:01:10.940259   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.957368   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:01:10.981430   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.994068   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.004477   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:01:11.026305   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.036854   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:11.051822   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:01:11.054832   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:01:11.062232   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:01:11.076011   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:01:11.171774   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:01:11.275110   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:01:11.275140   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:01:11.288936   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:11.387536   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:02:12.406129   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.018456537s)
	I0906 12:02:12.406196   12094 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:02:12.441627   12094 out.go:201] 
	W0906 12:02:12.462568   12094 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:01:09 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.043870308Z" level=info msg="Starting up"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044354837Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044967157Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=487
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.060420044Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076676910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076721339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076763510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076773987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076859504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076892444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077013033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077047570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077059390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077066478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077150509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077343912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078819720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078854498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078962243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078995587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079114046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079161625Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.080994591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081080643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081116220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081128366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081138130Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081232741Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081450103Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081587892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081629697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081643361Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081652352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081661711Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081669570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081678662Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081687446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081695440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081703002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081710262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081725308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081734548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081742314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081750339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081759393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081767473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081774660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081782278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081789971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081798862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081806711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081823704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081834097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081843649Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081857316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081865237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081872738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081916471Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081929926Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081937399Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081945271Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081951561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081959071Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081965521Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082596203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082656975Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082684672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082843808Z" level=info msg="containerd successfully booted in 0.023145s"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.061791246Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.091753353Z" level=info msg="Loading containers: start."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.248274667Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.308626646Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.351686229Z" level=info msg="Loading containers: done."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359245186Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359419132Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.381469858Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:01:10 ha-343000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.384079790Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.489514557Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490667952Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490928769Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491093022Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491132226Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:01:11 ha-343000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 dockerd[1161]: time="2024-09-06T19:01:12.525113343Z" level=info msg="Starting up"
	Sep 06 19:02:12 ha-343000-m02 dockerd[1161]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:01:09 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.043870308Z" level=info msg="Starting up"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044354837Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044967157Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=487
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.060420044Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076676910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076721339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076763510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076773987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076859504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076892444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077013033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077047570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077059390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077066478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077150509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077343912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078819720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078854498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078962243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078995587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079114046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079161625Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.080994591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081080643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081116220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081128366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081138130Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081232741Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081450103Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081587892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081629697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081643361Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081652352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081661711Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081669570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081678662Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081687446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081695440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081703002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081710262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081725308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081734548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081742314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081750339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081759393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081767473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081774660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081782278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081789971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081798862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081806711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081823704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081834097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081843649Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081857316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081865237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081872738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081916471Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081929926Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081937399Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081945271Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081951561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081959071Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081965521Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082596203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082656975Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082684672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082843808Z" level=info msg="containerd successfully booted in 0.023145s"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.061791246Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.091753353Z" level=info msg="Loading containers: start."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.248274667Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.308626646Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.351686229Z" level=info msg="Loading containers: done."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359245186Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359419132Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.381469858Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:01:10 ha-343000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.384079790Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.489514557Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490667952Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490928769Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491093022Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491132226Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:01:11 ha-343000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 dockerd[1161]: time="2024-09-06T19:01:12.525113343Z" level=info msg="Starting up"
	Sep 06 19:02:12 ha-343000-m02 dockerd[1161]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:02:12.462663   12094 out.go:270] * 
	W0906 12:02:12.463787   12094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:02:12.526575   12094 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-343000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-343000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000: exit status 2 (151.489922ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 logs -n 25: (2.154305793s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m03_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m04 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp testdata/cp-test.txt                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000 sudo cat                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m03 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-343000 node stop m02 -v=7                                                                                                 | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-343000 node start m02 -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000 -v=7                                                                                                       | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-343000 -v=7                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 12:00 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:00 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:00:13
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:00:13.694390   12094 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:00:13.694568   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694575   12094 out.go:358] Setting ErrFile to fd 2...
	I0906 12:00:13.694584   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694756   12094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:00:13.696524   12094 out.go:352] Setting JSON to false
	I0906 12:00:13.721080   12094 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10784,"bootTime":1725638429,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:00:13.721173   12094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:00:13.742655   12094 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:00:13.784492   12094 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:00:13.784545   12094 notify.go:220] Checking for updates...
	I0906 12:00:13.827582   12094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:13.848323   12094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:00:13.869497   12094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:00:13.890655   12094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:00:13.911464   12094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:00:13.933299   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:13.933473   12094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:00:13.934147   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:13.934226   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:13.943846   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56123
	I0906 12:00:13.944225   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:13.944638   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:13.944649   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:13.944842   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:13.944971   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:13.973620   12094 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:00:13.994428   12094 start.go:297] selected driver: hyperkit
	I0906 12:00:13.994464   12094 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:d
efault APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:13.994699   12094 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:00:13.994893   12094 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:13.995108   12094 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:00:14.004848   12094 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:00:14.008700   12094 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.008720   12094 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:00:14.011904   12094 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:00:14.011944   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:14.011950   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:14.012025   12094 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:14.012136   12094 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:14.054473   12094 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:00:14.075405   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:14.075507   12094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:00:14.075533   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:14.075741   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:14.075759   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:14.075970   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.076999   12094 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:14.077104   12094 start.go:364] duration metric: took 81.424µs to acquireMachinesLock for "ha-343000"
	I0906 12:00:14.077136   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:14.077155   12094 fix.go:54] fixHost starting: 
	I0906 12:00:14.077547   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.077578   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:14.086539   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56125
	I0906 12:00:14.086911   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:14.087275   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:14.087288   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:14.087499   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:14.087626   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.087742   12094 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:00:14.087847   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.087908   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 10421
	I0906 12:00:14.088810   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.088857   12094 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:00:14.088881   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:00:14.088974   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:14.130290   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:00:14.151187   12094 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:00:14.151341   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.151358   12094 main.go:141] libmachine: (ha-343000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:00:14.152544   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.152554   12094 main.go:141] libmachine: (ha-343000) DBG | pid 10421 is in state "Stopped"
	I0906 12:00:14.152567   12094 main.go:141] libmachine: (ha-343000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid...
	I0906 12:00:14.152736   12094 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:00:14.278050   12094 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:00:14.278072   12094 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:14.278193   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278238   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278268   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:14.278300   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:14.278328   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:14.279797   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Pid is 12107
	I0906 12:00:14.280167   12094 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:00:14.280184   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.280255   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:00:14.282230   12094 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:00:14.282307   12094 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:14.282320   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:14.282355   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:14.282372   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:00:14.282386   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca170}
	I0906 12:00:14.282393   12094 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:00:14.282401   12094 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:00:14.282427   12094 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:00:14.283073   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:14.283250   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.283690   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:14.283700   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.283812   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:14.283907   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:14.284012   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284129   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284231   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:14.284358   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:14.284630   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:14.284642   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:14.288262   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:14.344998   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:14.345710   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.345724   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.345740   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.345751   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.732607   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:14.732636   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:14.847834   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.847852   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.847864   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.847895   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.848717   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:14.848731   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:00:20.456737   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:00:20.456790   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:00:20.456799   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:00:20.482344   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:00:49.356770   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:00:49.356783   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.356936   12094 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:00:49.356945   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.357080   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.357164   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.357260   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357348   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357460   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.357608   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.357783   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.357791   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:00:49.434653   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:00:49.434670   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.434810   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.434910   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.434998   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.435076   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.435208   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.435362   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.435373   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:00:49.507101   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:00:49.507131   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:00:49.507148   12094 buildroot.go:174] setting up certificates
	I0906 12:00:49.507157   12094 provision.go:84] configureAuth start
	I0906 12:00:49.507164   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.507301   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:49.507386   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.507484   12094 provision.go:143] copyHostCerts
	I0906 12:00:49.507518   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.507591   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:00:49.507599   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.508035   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:00:49.508249   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508290   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:00:49.508295   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508374   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:00:49.508511   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508554   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:00:49.508560   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508641   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:00:49.508778   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:00:49.908537   12094 provision.go:177] copyRemoteCerts
	I0906 12:00:49.908600   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:00:49.908618   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.908766   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.908869   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.908969   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.909081   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:49.950319   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:00:49.950395   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:00:49.969178   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:00:49.969240   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0906 12:00:49.988087   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:00:49.988150   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:00:50.007042   12094 provision.go:87] duration metric: took 499.867022ms to configureAuth
	I0906 12:00:50.007055   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:00:50.007239   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:50.007254   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:50.007383   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.007480   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.007568   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007658   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007737   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.007851   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.007970   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.007977   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:00:50.074324   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:00:50.074334   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:00:50.074409   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:00:50.074422   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.074584   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.074695   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074789   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074892   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.075030   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.075178   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.075221   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:00:50.150993   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:00:50.151016   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.151152   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.151245   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151341   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151440   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.151557   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.151697   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.151709   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:00:51.817119   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:00:51.817133   12094 machine.go:96] duration metric: took 37.533362432s to provisionDockerMachine
	I0906 12:00:51.817147   12094 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:00:51.817155   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:00:51.817165   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.817341   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:00:51.817358   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.817453   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.817539   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.817633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.817710   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.857455   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:00:51.860581   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:00:51.860594   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:00:51.860691   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:00:51.860881   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:00:51.860887   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:00:51.861099   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:00:51.869229   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:51.888403   12094 start.go:296] duration metric: took 71.247262ms for postStartSetup
	I0906 12:00:51.888426   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.888596   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:00:51.888609   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.888701   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.888782   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.889409   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.889522   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.930243   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:00:51.930305   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:00:51.984449   12094 fix.go:56] duration metric: took 37.907224883s for fixHost
	I0906 12:00:51.984473   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.984633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.984732   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984820   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984909   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.985037   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:51.985190   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:51.985198   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:00:52.050855   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649252.136473627
	
	I0906 12:00:52.050870   12094 fix.go:216] guest clock: 1725649252.136473627
	I0906 12:00:52.050876   12094 fix.go:229] Guest: 2024-09-06 12:00:52.136473627 -0700 PDT Remote: 2024-09-06 12:00:51.984463 -0700 PDT m=+38.325391256 (delta=152.010627ms)
	I0906 12:00:52.050893   12094 fix.go:200] guest clock delta is within tolerance: 152.010627ms
	I0906 12:00:52.050897   12094 start.go:83] releasing machines lock for "ha-343000", held for 37.97370768s
	I0906 12:00:52.050919   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051055   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:52.051151   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051468   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051587   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051648   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:00:52.051681   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051732   12094 ssh_runner.go:195] Run: cat /version.json
	I0906 12:00:52.051743   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051763   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051867   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.051920   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051954   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052063   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.052085   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.052169   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052247   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.086438   12094 ssh_runner.go:195] Run: systemctl --version
	I0906 12:00:52.137495   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:00:52.142191   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:00:52.142231   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:00:52.154446   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:00:52.154458   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.154552   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.172091   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:00:52.181012   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:00:52.190031   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.190079   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:00:52.199064   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.207848   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:00:52.216656   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.225515   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:00:52.234566   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:00:52.243255   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:00:52.252029   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:00:52.260858   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:00:52.268821   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:00:52.276765   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.377515   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:00:52.394471   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.394552   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:00:52.407063   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.418612   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:00:52.433923   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.444946   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.455717   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:00:52.478561   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.492332   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.507486   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:00:52.510450   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:00:52.518207   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:00:52.531443   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:00:52.631849   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:00:52.738034   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.738112   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:00:52.751782   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.847435   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:00:55.174969   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327505108s)
	I0906 12:00:55.175030   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:00:55.186551   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.197381   12094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:00:55.299777   12094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:00:55.398609   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.498794   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:00:55.512395   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.523922   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.617484   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:00:55.684124   12094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:00:55.684200   12094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:00:55.688892   12094 start.go:563] Will wait 60s for crictl version
	I0906 12:00:55.688940   12094 ssh_runner.go:195] Run: which crictl
	I0906 12:00:55.692913   12094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:00:55.719238   12094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:00:55.719311   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.738356   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.778738   12094 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:00:55.778787   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:55.779172   12094 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:00:55.783863   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:00:55.794970   12094 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:00:55.795055   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:55.795104   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.809713   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.809724   12094 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:00:55.809795   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.823764   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.823788   12094 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:00:55.823798   12094 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:00:55.823893   12094 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:00:55.823968   12094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:00:55.861417   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:55.861428   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:55.861437   12094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:00:55.861452   12094 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:00:55.861532   12094 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:00:55.861545   12094 kube-vip.go:115] generating kube-vip config ...
	I0906 12:00:55.861593   12094 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:00:55.875047   12094 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:00:55.875114   12094 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:00:55.875172   12094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:00:55.890674   12094 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:00:55.890728   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:00:55.898141   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:00:55.911696   12094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:00:55.925468   12094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:00:55.940252   12094 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:00:55.953658   12094 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:00:55.956513   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:00:55.965807   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:56.068757   12094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:00:56.082925   12094 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:00:56.082937   12094 certs.go:194] generating shared ca certs ...
	I0906 12:00:56.082949   12094 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.083129   12094 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:00:56.083206   12094 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:00:56.083216   12094 certs.go:256] generating profile certs ...
	I0906 12:00:56.083325   12094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:00:56.083344   12094 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:00:56.083361   12094 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.24 192.169.0.25 192.169.0.26 192.169.0.254]
	I0906 12:00:56.334331   12094 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 ...
	I0906 12:00:56.334349   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57: {Name:mke69baf11a7ce9368028746c3ea673d595b5389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.334927   12094 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 ...
	I0906 12:00:56.334938   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57: {Name:mk818d10389922964dda91749efae3a655d8f5d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.335204   12094 certs.go:381] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt
	I0906 12:00:56.335461   12094 certs.go:385] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key
	I0906 12:00:56.335705   12094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:00:56.335715   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:00:56.335738   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:00:56.335758   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:00:56.335778   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:00:56.335796   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:00:56.335815   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:00:56.335833   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:00:56.335852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:00:56.335940   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:00:56.335991   12094 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:00:56.335999   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:00:56.336041   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:00:56.336081   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:00:56.336121   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:00:56.336206   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:56.336250   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.336272   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.336292   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.336712   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:00:56.388979   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:00:56.414966   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:00:56.439775   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:00:56.466208   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:00:56.492195   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:00:56.512216   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:00:56.532441   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:00:56.552158   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:00:56.571661   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:00:56.591148   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:00:56.610631   12094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:00:56.624148   12094 ssh_runner.go:195] Run: openssl version
	I0906 12:00:56.628419   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:00:56.636965   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640480   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640510   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.644827   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:00:56.653067   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:00:56.661485   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665034   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665069   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.669468   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:00:56.677956   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:00:56.686368   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689913   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689948   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.694107   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:00:56.702602   12094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:00:56.706177   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:00:56.711002   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:00:56.715284   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:00:56.720202   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:00:56.724667   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:00:56.728981   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:00:56.733338   12094 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:
192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fa
lse helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:56.733444   12094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:00:56.746587   12094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:00:56.754476   12094 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:00:56.754485   12094 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:00:56.754526   12094 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:00:56.762271   12094 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:00:56.762575   12094 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.762661   12094 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:00:56.762831   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.763230   12094 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.763419   12094 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf24aae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:00:56.763713   12094 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:00:56.763884   12094 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:00:56.771199   12094 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:00:56.771211   12094 kubeadm.go:597] duration metric: took 16.721202ms to restartPrimaryControlPlane
	I0906 12:00:56.771216   12094 kubeadm.go:394] duration metric: took 37.882882ms to StartCluster
	I0906 12:00:56.771224   12094 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771295   12094 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.771611   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771827   12094 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:00:56.771840   12094 start.go:241] waiting for startup goroutines ...
	I0906 12:00:56.771853   12094 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:00:56.771974   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.815354   12094 out.go:177] * Enabled addons: 
	I0906 12:00:56.836135   12094 addons.go:510] duration metric: took 64.272275ms for enable addons: enabled=[]
	I0906 12:00:56.836233   12094 start.go:246] waiting for cluster config update ...
	I0906 12:00:56.836259   12094 start.go:255] writing updated cluster config ...
	I0906 12:00:56.858430   12094 out.go:201] 
	I0906 12:00:56.879711   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.879825   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.901995   12094 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:00:56.944141   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:56.944200   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:56.944408   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:56.944427   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:56.944549   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.945615   12094 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:56.945736   12094 start.go:364] duration metric: took 97.464µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:00:56.945762   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:56.945772   12094 fix.go:54] fixHost starting: m02
	I0906 12:00:56.946173   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:56.946201   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:56.955570   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56147
	I0906 12:00:56.955905   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:56.956247   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:56.956263   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:56.956475   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:56.956595   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:56.956699   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:00:56.956773   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:56.956871   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 10914
	I0906 12:00:56.957763   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:56.957792   12094 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:00:56.957800   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:00:56.957882   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:57.000302   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:00:57.021304   12094 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:00:57.021585   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.021622   12094 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:00:57.022935   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:57.022948   12094 main.go:141] libmachine: (ha-343000-m02) DBG | pid 10914 is in state "Stopped"
	I0906 12:00:57.023011   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:00:57.023381   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:00:57.049902   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:00:57.049929   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:57.050062   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050089   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050146   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:57.050177   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:57.050183   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:57.051588   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Pid is 12118
	I0906 12:00:57.051949   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:00:57.051968   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.052042   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:00:57.054138   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:00:57.054208   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:57.054228   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca2c7}
	I0906 12:00:57.054254   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:57.054281   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:57.054300   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:00:57.054322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:00:57.054328   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:00:57.054969   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:00:57.055183   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:57.055671   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:57.055682   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:57.055826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:00:57.055916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:00:57.056038   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056169   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056275   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:00:57.056401   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:57.056636   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:00:57.056647   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:57.059445   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:57.069382   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:57.070322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.070335   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.070343   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.070352   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.458835   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:57.458851   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:57.573579   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.573599   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.573609   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.573621   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.574503   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:57.574513   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:01:03.177947   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:01:03.178017   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:01:03.178029   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:01:03.201747   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:01:08.125551   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:01:08.125569   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125712   12094 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:01:08.125723   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125829   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.125916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.126006   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126090   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126176   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.126310   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.126460   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.126470   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:01:08.196553   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:01:08.196570   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.196738   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.196849   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.196938   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.197031   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.197164   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.197302   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.197315   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:01:08.265441   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:01:08.265457   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:01:08.265466   12094 buildroot.go:174] setting up certificates
	I0906 12:01:08.265473   12094 provision.go:84] configureAuth start
	I0906 12:01:08.265479   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.265616   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:08.265727   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.265818   12094 provision.go:143] copyHostCerts
	I0906 12:01:08.265852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.265899   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:01:08.265905   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.266042   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:01:08.266231   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266259   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:01:08.266263   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266340   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:01:08.266475   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266502   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:01:08.266507   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266580   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:01:08.266719   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:01:08.411000   12094 provision.go:177] copyRemoteCerts
	I0906 12:01:08.411052   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:01:08.411067   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.411204   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.411300   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.411401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.411487   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:08.448748   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:01:08.448826   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:01:08.467690   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:01:08.467754   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:01:08.486653   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:01:08.486713   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:01:08.505720   12094 provision.go:87] duration metric: took 240.238536ms to configureAuth
	I0906 12:01:08.505733   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:01:08.505898   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:01:08.505912   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:08.506045   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.506132   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.506232   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506324   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.506529   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.506694   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.506702   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:01:08.568618   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:01:08.568634   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:01:08.568774   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:01:08.568789   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.568942   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.569034   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569216   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.569394   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.569538   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.569591   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:01:08.641655   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:01:08.641670   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.641797   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.641898   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.641987   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.642088   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.642231   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.642380   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.642393   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:01:10.295573   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:01:10.295588   12094 machine.go:96] duration metric: took 13.23988234s to provisionDockerMachine
	I0906 12:01:10.295597   12094 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:01:10.295605   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:01:10.295615   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.295802   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:01:10.295816   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.295925   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.296020   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.296104   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.296195   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.338012   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:01:10.342178   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:01:10.342189   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:01:10.342305   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:01:10.342480   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:01:10.342486   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:01:10.342677   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:01:10.352005   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:01:10.386594   12094 start.go:296] duration metric: took 90.988002ms for postStartSetup
	I0906 12:01:10.386614   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.387260   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:01:10.387299   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.387908   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.388016   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.388130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.388217   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.425216   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:01:10.425274   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:01:10.478532   12094 fix.go:56] duration metric: took 13.532732174s for fixHost
	I0906 12:01:10.478558   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.478717   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.478826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.478930   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.479017   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.479147   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:10.479284   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:10.479291   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:01:10.540605   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649270.629925942
	
	I0906 12:01:10.540619   12094 fix.go:216] guest clock: 1725649270.629925942
	I0906 12:01:10.540624   12094 fix.go:229] Guest: 2024-09-06 12:01:10.629925942 -0700 PDT Remote: 2024-09-06 12:01:10.478547 -0700 PDT m=+56.819439281 (delta=151.378942ms)
	I0906 12:01:10.540635   12094 fix.go:200] guest clock delta is within tolerance: 151.378942ms
	I0906 12:01:10.540639   12094 start.go:83] releasing machines lock for "ha-343000-m02", held for 13.594865643s
	I0906 12:01:10.540654   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.540778   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:10.562345   12094 out.go:177] * Found network options:
	I0906 12:01:10.583938   12094 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:01:10.604860   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.604892   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605507   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605705   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605840   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:01:10.605876   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:01:10.605977   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.606085   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:01:10.606085   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606109   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.606320   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606351   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606520   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606538   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606733   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.606776   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606943   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:01:10.641836   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:01:10.641895   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:01:10.688301   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:01:10.688319   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.688399   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:10.704168   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:01:10.713221   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:01:10.722234   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:01:10.722279   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:01:10.731269   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.740159   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:01:10.749214   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.758175   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:01:10.767634   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:01:10.776683   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:01:10.785787   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:01:10.794766   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:01:10.803033   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:01:10.811174   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:10.907940   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:01:10.926633   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.926708   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:01:10.940259   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.957368   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:01:10.981430   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.994068   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.004477   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:01:11.026305   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.036854   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:11.051822   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:01:11.054832   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:01:11.062232   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:01:11.076011   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:01:11.171774   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:01:11.275110   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:01:11.275140   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:01:11.288936   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:11.387536   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:02:12.406129   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.018456537s)
	I0906 12:02:12.406196   12094 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:02:12.441627   12094 out.go:201] 
	W0906 12:02:12.462568   12094 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:01:09 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.043870308Z" level=info msg="Starting up"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044354837Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044967157Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=487
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.060420044Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076676910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076721339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076763510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076773987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076859504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076892444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077013033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077047570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077059390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077066478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077150509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077343912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078819720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078854498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078962243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078995587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079114046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079161625Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.080994591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081080643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081116220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081128366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081138130Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081232741Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081450103Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081587892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081629697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081643361Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081652352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081661711Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081669570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081678662Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081687446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081695440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081703002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081710262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081725308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081734548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081742314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081750339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081759393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081767473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081774660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081782278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081789971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081798862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081806711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081823704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081834097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081843649Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081857316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081865237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081872738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081916471Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081929926Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081937399Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081945271Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081951561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081959071Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081965521Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082596203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082656975Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082684672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082843808Z" level=info msg="containerd successfully booted in 0.023145s"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.061791246Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.091753353Z" level=info msg="Loading containers: start."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.248274667Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.308626646Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.351686229Z" level=info msg="Loading containers: done."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359245186Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359419132Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.381469858Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:01:10 ha-343000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.384079790Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.489514557Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490667952Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490928769Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491093022Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491132226Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:01:11 ha-343000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 dockerd[1161]: time="2024-09-06T19:01:12.525113343Z" level=info msg="Starting up"
	Sep 06 19:02:12 ha-343000-m02 dockerd[1161]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:02:12.462663   12094 out.go:270] * 
	W0906 12:02:12.463787   12094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:02:12.526575   12094 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:01:03 ha-343000 dockerd[1107]: time="2024-09-06T19:01:03.251096467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:24 ha-343000 dockerd[1101]: time="2024-09-06T19:01:24.624172789Z" level=info msg="ignoring event" container=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.624732214Z" level=info msg="shim disconnected" id=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d namespace=moby
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.625304050Z" level=warning msg="cleaning up after shim disconnected" id=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d namespace=moby
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.625348043Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634231704Z" level=info msg="shim disconnected" id=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634560318Z" level=warning msg="cleaning up after shim disconnected" id=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634621676Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1101]: time="2024-09-06T19:01:25.635351473Z" level=info msg="ignoring event" container=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484108279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484268287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484288916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484379400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474777447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474870901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474947529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.475057744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:02:01 ha-343000 dockerd[1101]: time="2024-09-06T19:02:01.947178002Z" level=info msg="ignoring event" container=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.947382933Z" level=info msg="shim disconnected" id=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 namespace=moby
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.947983068Z" level=warning msg="cleaning up after shim disconnected" id=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 namespace=moby
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.948026288Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.431689003Z" level=info msg="shim disconnected" id=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1101]: time="2024-09-06T19:02:05.432125006Z" level=info msg="ignoring event" container=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.432353887Z" level=warning msg="cleaning up after shim disconnected" id=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.432492086Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c30e1728fc822       045733566833c                                                                                         29 seconds ago       Exited              kube-controller-manager   2                   a60c98dede813       kube-controller-manager-ha-343000
	fa4173483b359       604f5db92eaa8                                                                                         32 seconds ago       Exited              kube-apiserver            2                   53ce3e0f02186       kube-apiserver-ha-343000
	4066393d7e7ae       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   6a05e2d25f30e       kube-vip-ha-343000
	9b99b2f8d6eda       1766f54c897f0                                                                                         About a minute ago   Running             kube-scheduler            1                   920b387c38cf9       kube-scheduler-ha-343000
	11af4dafae646       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   c94f15fec6f2c       etcd-ha-343000
	126eb18521cb6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   2dc504f501783       busybox-7dff88458-x6w7h
	34d5a9fcc1387       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   80fa6178f69f4       coredns-6f6b679f8f-99jtt
	931a9cafdfafa       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7b9ebf456874a       coredns-6f6b679f8f-q4rhs
	051e748db656a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   3259bb347e186       storage-provisioner
	9e6763d81a899       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   c552ca6da226c       kindnet-tj4jx
	9ab0b6ac90ac6       ad83b2ca7b09e                                                                                         7 minutes ago        Exited              kube-proxy                0                   3b385975c32bf       kube-proxy-x6pfk
	b3713b7090d8f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago        Exited              kube-vip                  0                   23f83874ced46       kube-vip-ha-343000
	416ce752ac8fd       2e96e5913fc06                                                                                         7 minutes ago        Exited              etcd                      0                   e9c6f06bcc129       etcd-ha-343000
	e17d9a49b80dc       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            0                   e1c6cd8558983       kube-scheduler-ha-343000
	
	
	==> coredns [34d5a9fcc138] <==
	[INFO] 10.244.2.2:58789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120754s
	[INFO] 10.244.2.2:43811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080086s
	[INFO] 10.244.1.2:37705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094111s
	[INFO] 10.244.1.2:51020 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101921s
	[INFO] 10.244.1.2:35595 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128009s
	[INFO] 10.244.1.2:37466 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081653s
	[INFO] 10.244.1.2:44316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092754s
	[INFO] 10.244.0.4:46178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007817s
	[INFO] 10.244.0.4:45010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093888s
	[INFO] 10.244.0.4:53754 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054541s
	[INFO] 10.244.0.4:50908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074295s
	[INFO] 10.244.0.4:40350 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117915s
	[INFO] 10.244.2.2:46721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198726s
	[INFO] 10.244.2.2:49403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105805s
	[INFO] 10.244.2.2:38196 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015881s
	[INFO] 10.244.1.2:40271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009061s
	[INFO] 10.244.1.2:58192 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123353s
	[INFO] 10.244.1.2:58287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102796s
	[INFO] 10.244.2.2:60545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120865s
	[INFO] 10.244.1.2:58192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108489s
	[INFO] 10.244.0.4:46772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135939s
	[INFO] 10.244.0.4:57982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000032936s
	[INFO] 10.244.0.4:40948 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [931a9cafdfaf] <==
	[INFO] 10.244.2.2:47871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092349s
	[INFO] 10.244.2.2:36751 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154655s
	[INFO] 10.244.2.2:35765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113227s
	[INFO] 10.244.2.2:34953 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189846s
	[INFO] 10.244.1.2:37377 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000779385s
	[INFO] 10.244.1.2:36374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000523293s
	[INFO] 10.244.1.2:47415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043613s
	[INFO] 10.244.0.4:56645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006213s
	[INFO] 10.244.0.4:51009 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096214s
	[INFO] 10.244.0.4:41355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183012s
	[INFO] 10.244.2.2:50655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138209s
	[INFO] 10.244.1.2:38832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167262s
	[INFO] 10.244.0.4:46148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117645s
	[INFO] 10.244.0.4:43019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107376s
	[INFO] 10.244.0.4:57161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028792s
	[INFO] 10.244.0.4:42860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034502s
	[INFO] 10.244.2.2:36830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089883s
	[INFO] 10.244.2.2:47924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141909s
	[INFO] 10.244.2.2:47506 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000097095s
	[INFO] 10.244.1.2:49209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011143s
	[INFO] 10.244.1.2:36137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100085s
	[INFO] 10.244.1.2:47199 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000096821s
	[INFO] 10.244.0.4:43720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040385s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0906 19:02:13.912910    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:13.914376    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:13.916084    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:13.918473    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:13.920228    2487 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036349] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007955] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.714820] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.755188] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.246507] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.781180] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +0.108807] systemd-fstab-generator[501]: Ignoring "noauto" option for root device
	[  +1.950391] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.261568] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +0.099812] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[  +0.114205] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +2.455299] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.094890] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.054578] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.048897] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +0.114113] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.445466] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[Sep 6 19:01] kauditd_printk_skb: 88 callbacks suppressed
	[ +21.676711] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [11af4dafae64] <==
	{"level":"info","ts":"2024-09-06T19:02:11.355467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:11.355482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:11.355494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:11.355501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:11.492400Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:11.995224Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:12.497364Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:02:12.654837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:12.654885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:12.654901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:12.655042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:12.655073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:12.998366Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:13.499195Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:13.801295Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"6dbe4340aa302ff2","local-member-attributes":"{Name:ha-343000 ClientURLs:[https://192.169.0.24:2379]}","request-path":"/0/members/6dbe4340aa302ff2/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-06T19:02:13.803658Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:02:13.803681Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:02:13.813277Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:02:13.813317Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"info","ts":"2024-09-06T19:02:13.956063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:14.000031Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	
	
	==> etcd [416ce752ac8f] <==
	2024/09/06 19:00:05 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:05.829059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.22398722s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-06T19:00:05.833678Z","caller":"traceutil/trace.go:171","msg":"trace[234218137] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"7.228606076s","start":"2024-09-06T18:59:58.605067Z","end":"2024-09-06T19:00:05.833673Z","steps":["trace[234218137] 'agreement among raft nodes before linearized reading'  (duration: 7.223987765s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:00:05.833696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:59:58.605031Z","time spent":"7.228658753s","remote":"127.0.0.1:58976","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	2024/09/06 19:00:05 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:05.900577Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:00:05.900661Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.24:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:00:05.900726Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6dbe4340aa302ff2","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-06T19:00:05.902561Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902675Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902742Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902767Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902789Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902798Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902803Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.902808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.902818Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903077Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903113Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903226Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903260Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.905401Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.24:2380"}
	{"level":"info","ts":"2024-09-06T19:00:05.905481Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.24:2380"}
	{"level":"info","ts":"2024-09-06T19:00:05.905490Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-343000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.24:2380"],"advertise-client-urls":["https://192.169.0.24:2379"]}
	
	
	==> kernel <==
	 19:02:14 up 1 min,  0 users,  load average: 0.13, 0.06, 0.02
	Linux ha-343000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9e6763d81a89] <==
	I0906 18:59:27.723199       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727295       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:37.727338       1 main.go:299] handling current node
	I0906 18:59:37.727349       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:37.727353       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:37.727428       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:37.727453       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727489       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:37.727513       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:47.728363       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:47.728518       1 main.go:299] handling current node
	I0906 18:59:47.728633       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:47.728739       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:47.728918       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:47.728997       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:47.729121       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:47.729229       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:57.722632       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:57.722671       1 main.go:299] handling current node
	I0906 18:59:57.722682       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:57.722686       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:57.722937       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:57.722967       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:57.723092       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:57.723199       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fa4173483b35] <==
	I0906 19:01:41.578828       1 options.go:228] external host was not specified, using 192.169.0.24
	I0906 19:01:41.580198       1 server.go:142] Version: v1.31.0
	I0906 19:01:41.580268       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:01:41.924923       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:01:41.928767       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:01:41.931279       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:01:41.931403       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:01:41.931674       1 instance.go:232] Using reconciler: lease
	W0906 19:02:01.924600       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:01.924956       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:01.933589       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0906 19:02:01.933758       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [c30e1728fc82] <==
	I0906 19:01:44.954716       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:01:45.412135       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:01:45.412386       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:01:45.413610       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:01:45.413776       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:01:45.414123       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:01:45.414254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:02:05.417390       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.24:8443/healthz\": dial tcp 192.169.0.24:8443: connect: connection refused"
	
	
	==> kube-proxy [9ab0b6ac90ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:55:13.194683       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:55:13.204778       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 18:55:13.204815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:55:13.260675       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:55:13.260697       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:55:13.260715       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:55:13.267079       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:55:13.267303       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:55:13.267312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:55:13.269494       1 config.go:197] "Starting service config controller"
	I0906 18:55:13.269521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:55:13.269531       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:55:13.269534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:55:13.269766       1 config.go:326] "Starting node config controller"
	I0906 18:55:13.269792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:55:13.371232       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:55:13.371252       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:55:13.371258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9b99b2f8d6ed] <==
	E0906 19:01:55.677538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
	W0906 19:02:02.496712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:02.496797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:02.939571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.24:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.24:39010->192.169.0.24:8443: read: connection reset by peer
	E0906 19:02:02.940033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.24:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.24:39010->192.169.0.24:8443: read: connection reset by peer" logger="UnhandledError"
	W0906 19:02:02.939571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.24:41900->192.169.0.24:8443: read: connection reset by peer
	E0906 19:02:02.940432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.24:41900->192.169.0.24:8443: read: connection reset by peer" logger="UnhandledError"
	W0906 19:02:05.069159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.069252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:05.223901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.24:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.224034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.24:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:05.985644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.985935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:07.751221       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:07.751297       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:08.534428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:08.534502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:09.228523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:09.228578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:10.309496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:10.309595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:10.913838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:10.914076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:13.134630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:13.134666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [e17d9a49b80d] <==
	E0906 18:57:43.584607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3acb7359-b948-41f1-bb46-78ba7ca6ab4e(default/busybox-7dff88458-x6w7h) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x6w7h"
	E0906 18:57:43.584627       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x6w7h\": pod busybox-7dff88458-x6w7h is already assigned to node \"ha-343000\"" pod="default/busybox-7dff88458-x6w7h"
	I0906 18:57:43.584740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x6w7h" node="ha-343000"
	E0906 18:57:43.585378       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jk74s\": pod busybox-7dff88458-jk74s is already assigned to node \"ha-343000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jk74s" node="ha-343000-m02"
	E0906 18:57:43.586332       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2a6cd3d8-0270-4be8-adee-f6509d6f7d6a(default/busybox-7dff88458-jk74s) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jk74s"
	E0906 18:57:43.586381       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jk74s\": pod busybox-7dff88458-jk74s is already assigned to node \"ha-343000-m02\"" pod="default/busybox-7dff88458-jk74s"
	I0906 18:57:43.586399       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jk74s" node="ha-343000-m02"
	E0906 18:57:43.737576       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-2j5md is already present in the active queue" pod="default/busybox-7dff88458-2j5md"
	E0906 18:58:13.148396       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zj66t\": pod kube-proxy-zj66t is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zj66t" node="ha-343000-m04"
	E0906 18:58:13.149107       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cc9bbfbe-59d6-4ed5-acd0-d85ac97eb0f6(kube-system/kube-proxy-zj66t) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zj66t"
	E0906 18:58:13.149342       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zj66t\": pod kube-proxy-zj66t is already assigned to node \"ha-343000-m04\"" pod="kube-system/kube-proxy-zj66t"
	I0906 18:58:13.149401       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zj66t" node="ha-343000-m04"
	E0906 18:58:13.149049       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vbw2g\": pod kindnet-vbw2g is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vbw2g" node="ha-343000-m04"
	E0906 18:58:13.149550       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 73997222-df35-486b-a5c3-c245cfbde23e(kube-system/kindnet-vbw2g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vbw2g"
	E0906 18:58:13.149563       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vbw2g\": pod kindnet-vbw2g is already assigned to node \"ha-343000-m04\"" pod="kube-system/kindnet-vbw2g"
	I0906 18:58:13.149716       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vbw2g" node="ha-343000-m04"
	E0906 18:58:13.174957       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8hww6\": pod kube-proxy-8hww6 is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8hww6" node="ha-343000-m04"
	E0906 18:58:13.175481       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod aa46eef9-733c-4f42-8c7c-ad0ed8009b8a(kube-system/kube-proxy-8hww6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8hww6"
	E0906 18:58:13.175757       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8hww6\": pod kube-proxy-8hww6 is already assigned to node \"ha-343000-m04\"" pod="kube-system/kube-proxy-8hww6"
	I0906 18:58:13.175909       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8hww6" node="ha-343000-m04"
	E0906 18:58:14.877822       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q6946\": pod kindnet-q6946 is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-q6946" node="ha-343000-m04"
	E0906 18:58:14.877973       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5c83531b-b03e-46db-9169-70bd1bf41235(kube-system/kindnet-q6946) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-q6946"
	E0906 18:58:14.878004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q6946\": pod kindnet-q6946 is already assigned to node \"ha-343000-m04\"" pod="kube-system/kindnet-q6946"
	I0906 18:58:14.878024       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-q6946" node="ha-343000-m04"
	E0906 19:00:05.908240       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 06 19:01:56 ha-343000 kubelet[1516]: E0906 19:01:56.476756    1516 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-343000\" not found"
	Sep 06 19:02:01 ha-343000 kubelet[1516]: I0906 19:02:01.766212    1516 kubelet_node_status.go:72] "Attempting to register node" node="ha-343000"
	Sep 06 19:02:02 ha-343000 kubelet[1516]: I0906 19:02:02.146257    1516 scope.go:117] "RemoveContainer" containerID="6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d"
	Sep 06 19:02:02 ha-343000 kubelet[1516]: I0906 19:02:02.146866    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:02 ha-343000 kubelet[1516]: E0906 19:02:02.146989    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:03 ha-343000 kubelet[1516]: E0906 19:02:03.981737    1516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-343000"
	Sep 06 19:02:03 ha-343000 kubelet[1516]: E0906 19:02:03.981859    1516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-343000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.189750    1516 scope.go:117] "RemoveContainer" containerID="5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.190459    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.190562    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.271932    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.272125    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.478080    1516 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-343000\" not found"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: E0906 19:02:07.051517    1516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-343000.17f2bcdb164062c9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-343000,UID:ha-343000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-343000,},FirstTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,LastTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-343000,}"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: I0906 19:02:07.201616    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: E0906 19:02:07.201695    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:08 ha-343000 kubelet[1516]: I0906 19:02:08.334205    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:08 ha-343000 kubelet[1516]: E0906 19:02:08.334395    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:10 ha-343000 kubelet[1516]: I0906 19:02:10.984635    1516 kubelet_node_status.go:72] "Attempting to register node" node="ha-343000"
	Sep 06 19:02:12 ha-343000 kubelet[1516]: I0906 19:02:12.223100    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:12 ha-343000 kubelet[1516]: E0906 19:02:12.223243    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: W0906 19:02:13.195842    1516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196051    1516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196151    1516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-343000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196187    1516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-343000"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000: exit status 2 (151.670453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-343000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (148.73s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (2.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-343000 node delete m03 -v=7 --alsologtostderr: exit status 83 (177.413689ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-343000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-343000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:02:15.225776   12149 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:02:15.226689   12149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:02:15.226704   12149 out.go:358] Setting ErrFile to fd 2...
	I0906 12:02:15.226709   12149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:02:15.226887   12149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:02:15.227230   12149 mustload.go:65] Loading cluster: ha-343000
	I0906 12:02:15.227544   12149 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:02:15.227883   12149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.227929   12149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.236213   12149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56194
	I0906 12:02:15.236624   12149 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.237054   12149 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.237065   12149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.237313   12149 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.237433   12149 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:02:15.237515   12149 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:02:15.237580   12149 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:02:15.238534   12149 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:02:15.238804   12149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.238827   12149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.247008   12149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56196
	I0906 12:02:15.247334   12149 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.247651   12149 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.247660   12149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.247887   12149 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.248010   12149 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:02:15.248383   12149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.248408   12149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.256558   12149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56198
	I0906 12:02:15.256878   12149 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.257211   12149 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.257227   12149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.257467   12149 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.257600   12149 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:02:15.257685   12149 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:02:15.257755   12149 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:02:15.258696   12149 host.go:66] Checking if "ha-343000-m02" exists ...
	I0906 12:02:15.258936   12149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.258960   12149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.267281   12149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56200
	I0906 12:02:15.267624   12149 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.267978   12149 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.267993   12149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.268236   12149 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.268357   12149 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:02:15.268739   12149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.268763   12149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.277040   12149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56202
	I0906 12:02:15.277372   12149 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.277734   12149 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.277753   12149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.277962   12149 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.278067   12149 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:02:15.278152   12149 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:02:15.278232   12149 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:02:15.279165   12149 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:02:15.303815   12149 out.go:177] * The control-plane node ha-343000-m03 host is not running: state=Stopped
	I0906 12:02:15.323783   12149 out.go:177]   To start a cluster, run: "minikube start -p ha-343000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-amd64 -p ha-343000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr: exit status 7 (259.140724ms)

                                                
                                                
-- stdout --
	ha-343000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-343000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-343000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-343000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:02:15.403173   12156 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:02:15.403374   12156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:02:15.403381   12156 out.go:358] Setting ErrFile to fd 2...
	I0906 12:02:15.403385   12156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:02:15.403569   12156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:02:15.403752   12156 out.go:352] Setting JSON to false
	I0906 12:02:15.403776   12156 mustload.go:65] Loading cluster: ha-343000
	I0906 12:02:15.403831   12156 notify.go:220] Checking for updates...
	I0906 12:02:15.404087   12156 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:02:15.404103   12156 status.go:255] checking status of ha-343000 ...
	I0906 12:02:15.404468   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.404518   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.413300   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56205
	I0906 12:02:15.413679   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.414094   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.414103   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.414315   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.414427   12156 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:02:15.414507   12156 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:02:15.414578   12156 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:02:15.415557   12156 status.go:330] ha-343000 host status = "Running" (err=<nil>)
	I0906 12:02:15.415576   12156 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:02:15.415825   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.415847   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.424501   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56207
	I0906 12:02:15.424825   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.425143   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.425161   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.425391   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.425505   12156 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:02:15.425589   12156 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:02:15.425837   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.425862   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.434471   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56209
	I0906 12:02:15.434779   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.435147   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.435167   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.435370   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.435469   12156 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:02:15.435612   12156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:02:15.435636   12156 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:02:15.435714   12156 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:02:15.435786   12156 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:02:15.435862   12156 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:02:15.435944   12156 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:02:15.471420   12156 ssh_runner.go:195] Run: systemctl --version
	I0906 12:02:15.475668   12156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:02:15.486301   12156 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 12:02:15.486326   12156 api_server.go:166] Checking apiserver status ...
	I0906 12:02:15.486365   12156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 12:02:15.496457   12156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:02:15.496468   12156 status.go:422] ha-343000 apiserver status = Running (err=<nil>)
	I0906 12:02:15.496478   12156 status.go:257] ha-343000 status: &{Name:ha-343000 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:02:15.496491   12156 status.go:255] checking status of ha-343000-m02 ...
	I0906 12:02:15.496740   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.496761   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.505528   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56212
	I0906 12:02:15.505866   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.506224   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.506239   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.506449   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.506552   12156 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:02:15.506629   12156 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:02:15.506698   12156 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:02:15.507688   12156 status.go:330] ha-343000-m02 host status = "Running" (err=<nil>)
	I0906 12:02:15.507696   12156 host.go:66] Checking if "ha-343000-m02" exists ...
	I0906 12:02:15.507944   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.507966   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.516561   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56214
	I0906 12:02:15.516909   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.517229   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.517240   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.517449   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.517559   12156 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:02:15.517652   12156 host.go:66] Checking if "ha-343000-m02" exists ...
	I0906 12:02:15.517920   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.517942   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.526684   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56216
	I0906 12:02:15.527010   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.527332   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.527345   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.527563   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.527675   12156 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:02:15.527808   12156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:02:15.527819   12156 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:02:15.527902   12156 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:02:15.527977   12156 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:02:15.528060   12156 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:02:15.528147   12156 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:02:15.561883   12156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:02:15.572453   12156 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 12:02:15.572467   12156 api_server.go:166] Checking apiserver status ...
	I0906 12:02:15.572504   12156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0906 12:02:15.582356   12156 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:02:15.582365   12156 status.go:422] ha-343000-m02 apiserver status = Stopped (err=<nil>)
	I0906 12:02:15.582373   12156 status.go:257] ha-343000-m02 status: &{Name:ha-343000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:02:15.582383   12156 status.go:255] checking status of ha-343000-m03 ...
	I0906 12:02:15.582650   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.582671   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.591440   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56219
	I0906 12:02:15.591779   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.592130   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.592143   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.592387   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.592511   12156 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:02:15.592586   12156 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:02:15.592662   12156 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:02:15.593603   12156 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:02:15.593625   12156 status.go:330] ha-343000-m03 host status = "Stopped" (err=<nil>)
	I0906 12:02:15.593634   12156 status.go:343] host is not running, skipping remaining checks
	I0906 12:02:15.593640   12156 status.go:257] ha-343000-m03 status: &{Name:ha-343000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:02:15.593659   12156 status.go:255] checking status of ha-343000-m04 ...
	I0906 12:02:15.593921   12156 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:02:15.593957   12156 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:02:15.602471   12156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56221
	I0906 12:02:15.602789   12156 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:02:15.603111   12156 main.go:141] libmachine: Using API Version  1
	I0906 12:02:15.603121   12156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:02:15.603317   12156 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:02:15.603439   12156 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:02:15.603530   12156 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:02:15.603603   12156 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 12:02:15.604550   12156 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid 10558 missing from process table
	I0906 12:02:15.604609   12156 status.go:330] ha-343000-m04 host status = "Stopped" (err=<nil>)
	I0906 12:02:15.604618   12156 status.go:343] host is not running, skipping remaining checks
	I0906 12:02:15.604624   12156 status.go:257] ha-343000-m04 status: &{Name:ha-343000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000: exit status 2 (152.769212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 logs -n 25: (2.169969912s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m04 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp testdata/cp-test.txt                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000 sudo cat                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m03 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-343000 node stop m02 -v=7                                                                                                 | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-343000 node start m02 -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000 -v=7                                                                                                       | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-343000 -v=7                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 12:00 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:00 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	| node    | ha-343000 node delete m03 -v=7                                                                                               | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:00:13
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:00:13.694390   12094 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:00:13.694568   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694575   12094 out.go:358] Setting ErrFile to fd 2...
	I0906 12:00:13.694584   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694756   12094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:00:13.696524   12094 out.go:352] Setting JSON to false
	I0906 12:00:13.721080   12094 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10784,"bootTime":1725638429,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:00:13.721173   12094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:00:13.742655   12094 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:00:13.784492   12094 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:00:13.784545   12094 notify.go:220] Checking for updates...
	I0906 12:00:13.827582   12094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:13.848323   12094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:00:13.869497   12094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:00:13.890655   12094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:00:13.911464   12094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:00:13.933299   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:13.933473   12094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:00:13.934147   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:13.934226   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:13.943846   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56123
	I0906 12:00:13.944225   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:13.944638   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:13.944649   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:13.944842   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:13.944971   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:13.973620   12094 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:00:13.994428   12094 start.go:297] selected driver: hyperkit
	I0906 12:00:13.994464   12094 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:13.994699   12094 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:00:13.994893   12094 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:13.995108   12094 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:00:14.004848   12094 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:00:14.008700   12094 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.008720   12094 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:00:14.011904   12094 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:00:14.011944   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:14.011950   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:14.012025   12094 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:14.012136   12094 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:14.054473   12094 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:00:14.075405   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:14.075507   12094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:00:14.075533   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:14.075741   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:14.075759   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:14.075970   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.076999   12094 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:14.077104   12094 start.go:364] duration metric: took 81.424µs to acquireMachinesLock for "ha-343000"
	I0906 12:00:14.077136   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:14.077155   12094 fix.go:54] fixHost starting: 
	I0906 12:00:14.077547   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.077578   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:14.086539   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56125
	I0906 12:00:14.086911   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:14.087275   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:14.087288   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:14.087499   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:14.087626   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.087742   12094 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:00:14.087847   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.087908   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 10421
	I0906 12:00:14.088810   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.088857   12094 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:00:14.088881   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:00:14.088974   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:14.130290   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:00:14.151187   12094 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:00:14.151341   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.151358   12094 main.go:141] libmachine: (ha-343000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:00:14.152544   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.152554   12094 main.go:141] libmachine: (ha-343000) DBG | pid 10421 is in state "Stopped"
	I0906 12:00:14.152567   12094 main.go:141] libmachine: (ha-343000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid...
	I0906 12:00:14.152736   12094 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:00:14.278050   12094 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:00:14.278072   12094 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:14.278193   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278238   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278268   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:14.278300   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:14.278328   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:14.279797   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Pid is 12107
	I0906 12:00:14.280167   12094 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:00:14.280184   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.280255   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:00:14.282230   12094 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:00:14.282307   12094 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:14.282320   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:14.282355   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:14.282372   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:00:14.282386   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca170}
	I0906 12:00:14.282393   12094 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:00:14.282401   12094 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:00:14.282427   12094 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:00:14.283073   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:14.283250   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.283690   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:14.283700   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.283812   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:14.283907   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:14.284012   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284129   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284231   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:14.284358   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:14.284630   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:14.284642   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:14.288262   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:14.344998   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:14.345710   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.345724   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.345740   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.345751   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.732607   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:14.732636   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:14.847834   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.847852   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.847864   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.847895   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.848717   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:14.848731   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:00:20.456737   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:00:20.456790   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:00:20.456799   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:00:20.482344   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:00:49.356770   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:00:49.356783   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.356936   12094 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:00:49.356945   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.357080   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.357164   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.357260   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357348   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357460   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.357608   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.357783   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.357791   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:00:49.434653   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:00:49.434670   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.434810   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.434910   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.434998   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.435076   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.435208   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.435362   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.435373   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:00:49.507101   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:00:49.507131   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:00:49.507148   12094 buildroot.go:174] setting up certificates
	I0906 12:00:49.507157   12094 provision.go:84] configureAuth start
	I0906 12:00:49.507164   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.507301   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:49.507386   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.507484   12094 provision.go:143] copyHostCerts
	I0906 12:00:49.507518   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.507591   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:00:49.507599   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.508035   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:00:49.508249   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508290   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:00:49.508295   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508374   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:00:49.508511   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508554   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:00:49.508560   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508641   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:00:49.508778   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:00:49.908537   12094 provision.go:177] copyRemoteCerts
	I0906 12:00:49.908600   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:00:49.908618   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.908766   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.908869   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.908969   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.909081   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:49.950319   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:00:49.950395   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:00:49.969178   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:00:49.969240   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0906 12:00:49.988087   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:00:49.988150   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:00:50.007042   12094 provision.go:87] duration metric: took 499.867022ms to configureAuth
	I0906 12:00:50.007055   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:00:50.007239   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:50.007254   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:50.007383   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.007480   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.007568   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007658   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007737   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.007851   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.007970   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.007977   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:00:50.074324   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:00:50.074334   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:00:50.074409   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:00:50.074422   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.074584   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.074695   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074789   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074892   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.075030   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.075178   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.075221   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:00:50.150993   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:00:50.151016   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.151152   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.151245   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151341   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151440   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.151557   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.151697   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.151709   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:00:51.817119   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:00:51.817133   12094 machine.go:96] duration metric: took 37.533362432s to provisionDockerMachine
	I0906 12:00:51.817147   12094 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:00:51.817155   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:00:51.817165   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.817341   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:00:51.817358   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.817453   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.817539   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.817633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.817710   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.857455   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:00:51.860581   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:00:51.860594   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:00:51.860691   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:00:51.860881   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:00:51.860887   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:00:51.861099   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:00:51.869229   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:51.888403   12094 start.go:296] duration metric: took 71.247262ms for postStartSetup
	I0906 12:00:51.888426   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.888596   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:00:51.888609   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.888701   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.888782   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.889409   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.889522   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.930243   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:00:51.930305   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:00:51.984449   12094 fix.go:56] duration metric: took 37.907224883s for fixHost
	I0906 12:00:51.984473   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.984633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.984732   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984820   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984909   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.985037   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:51.985190   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:51.985198   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:00:52.050855   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649252.136473627
	
	I0906 12:00:52.050870   12094 fix.go:216] guest clock: 1725649252.136473627
	I0906 12:00:52.050876   12094 fix.go:229] Guest: 2024-09-06 12:00:52.136473627 -0700 PDT Remote: 2024-09-06 12:00:51.984463 -0700 PDT m=+38.325391256 (delta=152.010627ms)
	I0906 12:00:52.050893   12094 fix.go:200] guest clock delta is within tolerance: 152.010627ms
	I0906 12:00:52.050897   12094 start.go:83] releasing machines lock for "ha-343000", held for 37.97370768s
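The `date +%s.%N` probe above is how minikube measures guest/host clock skew before releasing the machine lock. A minimal sketch (plain Python, not minikube code) reproducing the ~152ms delta reported in this log; the tolerance constant is an assumption for illustration, not minikube's actual threshold:

```python
# Timestamps copied from the log lines above (seconds since the Unix epoch).
guest_clock = 1725649252.136473627  # value returned by `date +%s.%N` inside the VM
host_clock = 1725649251.984463      # host-side time when the SSH command returned

delta = guest_clock - host_clock
print(f"delta={delta * 1000:.6f}ms")  # matches the 152.010627ms delta in the log

# minikube accepts small skews; the exact tolerance below is a hypothetical
# placeholder, not the value minikube uses.
TOLERANCE_S = 2.0
assert abs(delta) < TOLERANCE_S
```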
	I0906 12:00:52.050919   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051055   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:52.051151   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051468   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051587   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051648   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:00:52.051681   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051732   12094 ssh_runner.go:195] Run: cat /version.json
	I0906 12:00:52.051743   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051763   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051867   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.051920   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051954   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052063   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.052085   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.052169   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052247   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.086438   12094 ssh_runner.go:195] Run: systemctl --version
	I0906 12:00:52.137495   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:00:52.142191   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:00:52.142231   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:00:52.154446   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:00:52.154458   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.154552   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.172091   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:00:52.181012   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:00:52.190031   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.190079   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:00:52.199064   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.207848   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:00:52.216656   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.225515   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:00:52.234566   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:00:52.243255   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:00:52.252029   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:00:52.260858   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:00:52.268821   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:00:52.276765   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.377515   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:00:52.394471   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.394552   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:00:52.407063   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.418612   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:00:52.433923   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.444946   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.455717   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:00:52.478561   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.492332   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.507486   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:00:52.510450   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:00:52.518207   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:00:52.531443   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:00:52.631849   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:00:52.738034   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.738112   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:00:52.751782   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.847435   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:00:55.174969   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327505108s)
	I0906 12:00:55.175030   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:00:55.186551   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.197381   12094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:00:55.299777   12094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:00:55.398609   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.498794   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:00:55.512395   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.523922   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.617484   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:00:55.684124   12094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:00:55.684200   12094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:00:55.688892   12094 start.go:563] Will wait 60s for crictl version
	I0906 12:00:55.688940   12094 ssh_runner.go:195] Run: which crictl
	I0906 12:00:55.692913   12094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:00:55.719238   12094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:00:55.719311   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.738356   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.778738   12094 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:00:55.778787   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:55.779172   12094 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:00:55.783863   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
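The `/bin/bash -c` one-liner above makes the `host.minikube.internal` entry idempotent: strip any existing line for the hostname, append a fresh one, then copy the result back over `/etc/hosts`. A small Python sketch of the same filter-then-append idea (the function name and sample entries are hypothetical, not the minikube implementation):

```python
def upsert_hosts_entry(lines, ip, hostname):
    """Drop any line ending in `\t<hostname>`, then append `<ip>\t<hostname>`,
    mirroring the `grep -v` + `echo` pipeline in the log line above."""
    suffix = f"\t{hostname}"
    kept = [line for line in lines if not line.endswith(suffix)]
    kept.append(f"{ip}{suffix}")
    return kept

hosts = ["127.0.0.1\tlocalhost", "192.169.0.1\thost.minikube.internal"]
updated = upsert_hosts_entry(hosts, "192.169.0.1", "host.minikube.internal")

# The entry appears exactly once, and re-running the update is a no-op.
assert updated.count("192.169.0.1\thost.minikube.internal") == 1
assert upsert_hosts_entry(updated, "192.169.0.1", "host.minikube.internal") == updated
```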
	I0906 12:00:55.794970   12094 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAV
IP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp
:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:00:55.795055   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:55.795104   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.809713   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.809724   12094 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:00:55.809795   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.823764   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.823788   12094 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:00:55.823798   12094 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:00:55.823893   12094 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:00:55.823968   12094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:00:55.861417   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:55.861428   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:55.861437   12094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:00:55.861452   12094 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:00:55.861532   12094 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:00:55.861545   12094 kube-vip.go:115] generating kube-vip config ...
	I0906 12:00:55.861593   12094 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:00:55.875047   12094 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:00:55.875114   12094 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
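The kube-vip manifest above configures control-plane leader election through the `vip_leaseduration`, `vip_renewdeadline`, and `vip_retryperiod` environment variables. These follow the usual Kubernetes leader-election ordering (lease duration > renew deadline > retry period); a quick sanity check of the values in this generated config, written as plain Python rather than anything minikube runs:

```python
# Values copied from the kube-vip pod env in the config above.
lease_duration = 5  # vip_leaseduration: seconds a lease stays valid
renew_deadline = 3  # vip_renewdeadline: leader must renew within this window
retry_period = 1    # vip_retryperiod: how often candidates retry acquisition

# Standard leader-election invariant: a leader must be able to renew before
# the lease expires, and retries must fit inside the renew window.
assert lease_duration > renew_deadline > retry_period
```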
	I0906 12:00:55.875172   12094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:00:55.890674   12094 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:00:55.890728   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:00:55.898141   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:00:55.911696   12094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:00:55.925468   12094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:00:55.940252   12094 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:00:55.953658   12094 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:00:55.956513   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:00:55.965807   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:56.068757   12094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:00:56.082925   12094 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:00:56.082937   12094 certs.go:194] generating shared ca certs ...
	I0906 12:00:56.082949   12094 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.083129   12094 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:00:56.083206   12094 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:00:56.083216   12094 certs.go:256] generating profile certs ...
	I0906 12:00:56.083325   12094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:00:56.083344   12094 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:00:56.083361   12094 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.24 192.169.0.25 192.169.0.26 192.169.0.254]
	I0906 12:00:56.334331   12094 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 ...
	I0906 12:00:56.334349   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57: {Name:mke69baf11a7ce9368028746c3ea673d595b5389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.334927   12094 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 ...
	I0906 12:00:56.334938   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57: {Name:mk818d10389922964dda91749efae3a655d8f5d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.335204   12094 certs.go:381] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt
	I0906 12:00:56.335461   12094 certs.go:385] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key
	I0906 12:00:56.335705   12094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:00:56.335715   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:00:56.335738   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:00:56.335758   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:00:56.335778   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:00:56.335796   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:00:56.335815   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:00:56.335833   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:00:56.335852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:00:56.335940   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:00:56.335991   12094 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:00:56.335999   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:00:56.336041   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:00:56.336081   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:00:56.336121   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:00:56.336206   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:56.336250   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.336272   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.336292   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.336712   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:00:56.388979   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:00:56.414966   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:00:56.439775   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:00:56.466208   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:00:56.492195   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:00:56.512216   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:00:56.532441   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:00:56.552158   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:00:56.571661   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:00:56.591148   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:00:56.610631   12094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:00:56.624148   12094 ssh_runner.go:195] Run: openssl version
	I0906 12:00:56.628419   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:00:56.636965   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640480   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640510   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.644827   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:00:56.653067   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:00:56.661485   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665034   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665069   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.669468   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:00:56.677956   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:00:56.686368   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689913   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689948   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.694107   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
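The symlink names created above (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) are OpenSSL subject-hash names: `openssl x509 -hash` prints the hash, and OpenSSL's verifier looks up CA certificates in `/etc/ssl/certs` by `<hash>.0`. A self-contained sketch of the same scheme using a throwaway self-signed certificate (all paths here are illustrative):

```shell
#!/usr/bin/env bash
# Generate a throwaway self-signed cert so this runs anywhere openssl is installed.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
# The subject hash becomes the filename OpenSSL searches for (cf. b5213941.0 above).
HASH=$(openssl x509 -hash -noout -in /tmp/demo.pem)
ln -fs /tmp/demo.pem "/tmp/${HASH}.0"
ls -l "/tmp/${HASH}.0"
```

The `.0` suffix is a collision counter; a second CA with the same subject hash would be linked as `<hash>.1`.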
	I0906 12:00:56.702602   12094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:00:56.706177   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:00:56.711002   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:00:56.715284   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:00:56.720202   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:00:56.724667   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:00:56.728981   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
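The `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate remains valid for the next 86400 seconds (24 hours); a nonzero exit is what prompts minikube to regenerate the cert. A sketch with a throwaway certificate (filenames are illustrative):

```shell
#!/usr/bin/env bash
# Create a cert valid for 30 days, then ask whether it survives the next 24h.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=ck" -days 30 \
  -keyout /tmp/ck.key -out /tmp/ck.pem 2>/dev/null
# -checkend exits 0 if the cert is still valid <seconds> from now, 1 otherwise.
if openssl x509 -noout -in /tmp/ck.pem -checkend 86400; then
  echo "valid for at least another 24h"
fi
```

Passing `-checkend $((86400*31))` against the same cert would exit nonzero, since the cert expires within 31 days.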
	I0906 12:00:56.733338   12094 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:56.733444   12094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:00:56.746587   12094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:00:56.754476   12094 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:00:56.754485   12094 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:00:56.754526   12094 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:00:56.762271   12094 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:00:56.762575   12094 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.762661   12094 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:00:56.762831   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.763230   12094 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.763419   12094 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf24aae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:00:56.763713   12094 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:00:56.763884   12094 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:00:56.771199   12094 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:00:56.771211   12094 kubeadm.go:597] duration metric: took 16.721202ms to restartPrimaryControlPlane
	I0906 12:00:56.771216   12094 kubeadm.go:394] duration metric: took 37.882882ms to StartCluster
	I0906 12:00:56.771224   12094 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771295   12094 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.771611   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771827   12094 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:00:56.771840   12094 start.go:241] waiting for startup goroutines ...
	I0906 12:00:56.771853   12094 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:00:56.771974   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.815354   12094 out.go:177] * Enabled addons: 
	I0906 12:00:56.836135   12094 addons.go:510] duration metric: took 64.272275ms for enable addons: enabled=[]
	I0906 12:00:56.836233   12094 start.go:246] waiting for cluster config update ...
	I0906 12:00:56.836259   12094 start.go:255] writing updated cluster config ...
	I0906 12:00:56.858430   12094 out.go:201] 
	I0906 12:00:56.879711   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.879825   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.901995   12094 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:00:56.944141   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:56.944200   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:56.944408   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:56.944427   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:56.944549   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.945615   12094 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:56.945736   12094 start.go:364] duration metric: took 97.464µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:00:56.945762   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:56.945772   12094 fix.go:54] fixHost starting: m02
	I0906 12:00:56.946173   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:56.946201   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:56.955570   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56147
	I0906 12:00:56.955905   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:56.956247   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:56.956263   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:56.956475   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:56.956595   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:56.956699   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:00:56.956773   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:56.956871   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 10914
	I0906 12:00:56.957763   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:56.957792   12094 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:00:56.957800   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:00:56.957882   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:57.000302   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:00:57.021304   12094 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:00:57.021585   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.021622   12094 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:00:57.022935   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:57.022948   12094 main.go:141] libmachine: (ha-343000-m02) DBG | pid 10914 is in state "Stopped"
	I0906 12:00:57.023011   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:00:57.023381   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:00:57.049902   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:00:57.049929   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:57.050062   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050089   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050146   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:57.050177   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:57.050183   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:57.051588   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Pid is 12118
	I0906 12:00:57.051949   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:00:57.051968   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.052042   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:00:57.054138   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:00:57.054208   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:57.054228   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca2c7}
	I0906 12:00:57.054254   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:57.054281   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:57.054300   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:00:57.054322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:00:57.054328   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:00:57.054969   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:00:57.055183   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:57.055671   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:57.055682   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:57.055826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:00:57.055916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:00:57.056038   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056169   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056275   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:00:57.056401   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:57.056636   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:00:57.056647   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:57.059445   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:57.069382   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:57.070322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.070335   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.070343   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.070352   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.458835   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:57.458851   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:57.573579   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.573599   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.573609   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.573621   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.574503   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:57.574513   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:01:03.177947   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:01:03.178017   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:01:03.178029   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:01:03.201747   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:01:08.125551   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:01:08.125569   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125712   12094 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:01:08.125723   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125829   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.125916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.126006   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126090   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126176   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.126310   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.126460   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.126470   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:01:08.196553   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:01:08.196570   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.196738   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.196849   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.196938   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.197031   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.197164   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.197302   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.197315   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:01:08.265441   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:01:08.265457   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:01:08.265466   12094 buildroot.go:174] setting up certificates
	I0906 12:01:08.265473   12094 provision.go:84] configureAuth start
	I0906 12:01:08.265479   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.265616   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:08.265727   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.265818   12094 provision.go:143] copyHostCerts
	I0906 12:01:08.265852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.265899   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:01:08.265905   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.266042   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:01:08.266231   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266259   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:01:08.266263   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266340   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:01:08.266475   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266502   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:01:08.266507   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266580   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:01:08.266719   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:01:08.411000   12094 provision.go:177] copyRemoteCerts
	I0906 12:01:08.411052   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:01:08.411067   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.411204   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.411300   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.411401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.411487   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:08.448748   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:01:08.448826   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:01:08.467690   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:01:08.467754   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:01:08.486653   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:01:08.486713   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:01:08.505720   12094 provision.go:87] duration metric: took 240.238536ms to configureAuth
	I0906 12:01:08.505733   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:01:08.505898   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:01:08.505912   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:08.506045   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.506132   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.506232   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506324   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.506529   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.506694   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.506702   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:01:08.568618   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:01:08.568634   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:01:08.568774   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:01:08.568789   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.568942   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.569034   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569216   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.569394   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.569538   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.569591   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:01:08.641655   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:01:08.641670   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.641797   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.641898   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.641987   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.642088   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.642231   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.642380   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.642393   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:01:10.295573   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:01:10.295588   12094 machine.go:96] duration metric: took 13.23988234s to provisionDockerMachine
	I0906 12:01:10.295597   12094 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:01:10.295605   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:01:10.295615   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.295802   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:01:10.295816   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.295925   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.296020   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.296104   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.296195   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.338012   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:01:10.342178   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:01:10.342189   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:01:10.342305   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:01:10.342480   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:01:10.342486   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:01:10.342677   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:01:10.352005   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:01:10.386594   12094 start.go:296] duration metric: took 90.988002ms for postStartSetup
	I0906 12:01:10.386614   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.387260   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:01:10.387299   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.387908   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.388016   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.388130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.388217   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.425216   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:01:10.425274   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:01:10.478532   12094 fix.go:56] duration metric: took 13.532732174s for fixHost
	I0906 12:01:10.478558   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.478717   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.478826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.478930   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.479017   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.479147   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:10.479284   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:10.479291   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:01:10.540605   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649270.629925942
	
	I0906 12:01:10.540619   12094 fix.go:216] guest clock: 1725649270.629925942
	I0906 12:01:10.540624   12094 fix.go:229] Guest: 2024-09-06 12:01:10.629925942 -0700 PDT Remote: 2024-09-06 12:01:10.478547 -0700 PDT m=+56.819439281 (delta=151.378942ms)
	I0906 12:01:10.540635   12094 fix.go:200] guest clock delta is within tolerance: 151.378942ms
	I0906 12:01:10.540639   12094 start.go:83] releasing machines lock for "ha-343000-m02", held for 13.594865643s
	I0906 12:01:10.540654   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.540778   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:10.562345   12094 out.go:177] * Found network options:
	I0906 12:01:10.583938   12094 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:01:10.604860   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.604892   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605507   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605705   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605840   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:01:10.605876   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:01:10.605977   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.606085   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:01:10.606085   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606109   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.606320   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606351   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606520   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606538   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606733   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.606776   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606943   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:01:10.641836   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:01:10.641895   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:01:10.688301   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:01:10.688319   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.688399   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:10.704168   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:01:10.713221   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:01:10.722234   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:01:10.722279   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:01:10.731269   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.740159   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:01:10.749214   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.758175   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:01:10.767634   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:01:10.776683   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:01:10.785787   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:01:10.794766   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:01:10.803033   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:01:10.811174   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:10.907940   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:01:10.926633   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.926708   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:01:10.940259   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.957368   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:01:10.981430   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.994068   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.004477   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:01:11.026305   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.036854   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:11.051822   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:01:11.054832   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:01:11.062232   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:01:11.076011   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:01:11.171774   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:01:11.275110   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:01:11.275140   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:01:11.288936   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:11.387536   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:02:12.406129   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.018456537s)
	I0906 12:02:12.406196   12094 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:02:12.441627   12094 out.go:201] 
	W0906 12:02:12.462568   12094 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:01:09 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.043870308Z" level=info msg="Starting up"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044354837Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044967157Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=487
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.060420044Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076676910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076721339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076763510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076773987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076859504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076892444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077013033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077047570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077059390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077066478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077150509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077343912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078819720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078854498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078962243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078995587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079114046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079161625Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.080994591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081080643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081116220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081128366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081138130Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081232741Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081450103Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081587892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081629697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081643361Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081652352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081661711Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081669570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081678662Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081687446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081695440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081703002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081710262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081725308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081734548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081742314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081750339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081759393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081767473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081774660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081782278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081789971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081798862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081806711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081823704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081834097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081843649Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081857316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081865237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081872738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081916471Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081929926Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081937399Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081945271Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081951561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081959071Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081965521Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082596203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082656975Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082684672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082843808Z" level=info msg="containerd successfully booted in 0.023145s"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.061791246Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.091753353Z" level=info msg="Loading containers: start."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.248274667Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.308626646Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.351686229Z" level=info msg="Loading containers: done."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359245186Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359419132Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.381469858Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:01:10 ha-343000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.384079790Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.489514557Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490667952Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490928769Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491093022Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491132226Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:01:11 ha-343000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 dockerd[1161]: time="2024-09-06T19:01:12.525113343Z" level=info msg="Starting up"
	Sep 06 19:02:12 ha-343000-m02 dockerd[1161]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:02:12.462663   12094 out.go:270] * 
	W0906 12:02:12.463787   12094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:02:12.526575   12094 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:01:03 ha-343000 dockerd[1107]: time="2024-09-06T19:01:03.251096467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:24 ha-343000 dockerd[1101]: time="2024-09-06T19:01:24.624172789Z" level=info msg="ignoring event" container=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.624732214Z" level=info msg="shim disconnected" id=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d namespace=moby
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.625304050Z" level=warning msg="cleaning up after shim disconnected" id=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d namespace=moby
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.625348043Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634231704Z" level=info msg="shim disconnected" id=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634560318Z" level=warning msg="cleaning up after shim disconnected" id=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634621676Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1101]: time="2024-09-06T19:01:25.635351473Z" level=info msg="ignoring event" container=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484108279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484268287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484288916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484379400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474777447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474870901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474947529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.475057744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:02:01 ha-343000 dockerd[1101]: time="2024-09-06T19:02:01.947178002Z" level=info msg="ignoring event" container=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.947382933Z" level=info msg="shim disconnected" id=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 namespace=moby
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.947983068Z" level=warning msg="cleaning up after shim disconnected" id=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 namespace=moby
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.948026288Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.431689003Z" level=info msg="shim disconnected" id=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1101]: time="2024-09-06T19:02:05.432125006Z" level=info msg="ignoring event" container=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.432353887Z" level=warning msg="cleaning up after shim disconnected" id=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.432492086Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c30e1728fc822       045733566833c                                                                                         32 seconds ago       Exited              kube-controller-manager   2                   a60c98dede813       kube-controller-manager-ha-343000
	fa4173483b359       604f5db92eaa8                                                                                         35 seconds ago       Exited              kube-apiserver            2                   53ce3e0f02186       kube-apiserver-ha-343000
	4066393d7e7ae       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   6a05e2d25f30e       kube-vip-ha-343000
	9b99b2f8d6eda       1766f54c897f0                                                                                         About a minute ago   Running             kube-scheduler            1                   920b387c38cf9       kube-scheduler-ha-343000
	11af4dafae646       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   c94f15fec6f2c       etcd-ha-343000
	126eb18521cb6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   2dc504f501783       busybox-7dff88458-x6w7h
	34d5a9fcc1387       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   80fa6178f69f4       coredns-6f6b679f8f-99jtt
	931a9cafdfafa       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7b9ebf456874a       coredns-6f6b679f8f-q4rhs
	051e748db656a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   3259bb347e186       storage-provisioner
	9e6763d81a899       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   c552ca6da226c       kindnet-tj4jx
	9ab0b6ac90ac6       ad83b2ca7b09e                                                                                         7 minutes ago        Exited              kube-proxy                0                   3b385975c32bf       kube-proxy-x6pfk
	b3713b7090d8f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago        Exited              kube-vip                  0                   23f83874ced46       kube-vip-ha-343000
	416ce752ac8fd       2e96e5913fc06                                                                                         7 minutes ago        Exited              etcd                      0                   e9c6f06bcc129       etcd-ha-343000
	e17d9a49b80dc       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            0                   e1c6cd8558983       kube-scheduler-ha-343000
	
	
	==> coredns [34d5a9fcc138] <==
	[INFO] 10.244.2.2:58789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120754s
	[INFO] 10.244.2.2:43811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080086s
	[INFO] 10.244.1.2:37705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094111s
	[INFO] 10.244.1.2:51020 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101921s
	[INFO] 10.244.1.2:35595 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128009s
	[INFO] 10.244.1.2:37466 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081653s
	[INFO] 10.244.1.2:44316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092754s
	[INFO] 10.244.0.4:46178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007817s
	[INFO] 10.244.0.4:45010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093888s
	[INFO] 10.244.0.4:53754 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054541s
	[INFO] 10.244.0.4:50908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074295s
	[INFO] 10.244.0.4:40350 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117915s
	[INFO] 10.244.2.2:46721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198726s
	[INFO] 10.244.2.2:49403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105805s
	[INFO] 10.244.2.2:38196 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015881s
	[INFO] 10.244.1.2:40271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009061s
	[INFO] 10.244.1.2:58192 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123353s
	[INFO] 10.244.1.2:58287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102796s
	[INFO] 10.244.2.2:60545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120865s
	[INFO] 10.244.1.2:58192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108489s
	[INFO] 10.244.0.4:46772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135939s
	[INFO] 10.244.0.4:57982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000032936s
	[INFO] 10.244.0.4:40948 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [931a9cafdfaf] <==
	[INFO] 10.244.2.2:47871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092349s
	[INFO] 10.244.2.2:36751 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154655s
	[INFO] 10.244.2.2:35765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113227s
	[INFO] 10.244.2.2:34953 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189846s
	[INFO] 10.244.1.2:37377 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000779385s
	[INFO] 10.244.1.2:36374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000523293s
	[INFO] 10.244.1.2:47415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043613s
	[INFO] 10.244.0.4:56645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006213s
	[INFO] 10.244.0.4:51009 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096214s
	[INFO] 10.244.0.4:41355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183012s
	[INFO] 10.244.2.2:50655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138209s
	[INFO] 10.244.1.2:38832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167262s
	[INFO] 10.244.0.4:46148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117645s
	[INFO] 10.244.0.4:43019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107376s
	[INFO] 10.244.0.4:57161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028792s
	[INFO] 10.244.0.4:42860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034502s
	[INFO] 10.244.2.2:36830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089883s
	[INFO] 10.244.2.2:47924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141909s
	[INFO] 10.244.2.2:47506 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000097095s
	[INFO] 10.244.1.2:49209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011143s
	[INFO] 10.244.1.2:36137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100085s
	[INFO] 10.244.1.2:47199 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000096821s
	[INFO] 10.244.0.4:43720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040385s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0906 19:02:16.855503    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:16.857189    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:16.858692    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:16.860374    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:16.862192    2681 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036349] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007955] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.714820] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.755188] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.246507] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.781180] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +0.108807] systemd-fstab-generator[501]: Ignoring "noauto" option for root device
	[  +1.950391] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.261568] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +0.099812] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[  +0.114205] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +2.455299] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.094890] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.054578] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.048897] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +0.114113] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.445466] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[Sep 6 19:01] kauditd_printk_skb: 88 callbacks suppressed
	[ +21.676711] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [11af4dafae64] <==
	{"level":"info","ts":"2024-09-06T19:02:13.956063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:13.956216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:14.000031Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:14.500371Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:15.000538Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:02:15.255298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:15.255330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:15.255341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:15.255354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:15.255361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:15.502084Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:16.003131Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:16.503804Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:02:16.555243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:16.989684Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-06T19:02:16.989763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.001521319s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-06T19:02:16.989789Z","caller":"traceutil/trace.go:171","msg":"trace[1733632827] range","detail":"{range_begin:; range_end:; }","duration":"7.001560676s","start":"2024-09-06T19:02:09.988218Z","end":"2024-09-06T19:02:16.989779Z","steps":["trace[1733632827] 'agreement among raft nodes before linearized reading'  (duration: 7.001519451s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:02:16.989846Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> etcd [416ce752ac8f] <==
	2024/09/06 19:00:05 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:05.829059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.22398722s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-06T19:00:05.833678Z","caller":"traceutil/trace.go:171","msg":"trace[234218137] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"7.228606076s","start":"2024-09-06T18:59:58.605067Z","end":"2024-09-06T19:00:05.833673Z","steps":["trace[234218137] 'agreement among raft nodes before linearized reading'  (duration: 7.223987765s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:00:05.833696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:59:58.605031Z","time spent":"7.228658753s","remote":"127.0.0.1:58976","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	2024/09/06 19:00:05 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:05.900577Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:00:05.900661Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.24:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:00:05.900726Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6dbe4340aa302ff2","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-06T19:00:05.902561Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902675Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902742Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902767Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902789Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902798Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902803Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.902808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.902818Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903077Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903113Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903226Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903260Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.905401Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.24:2380"}
	{"level":"info","ts":"2024-09-06T19:00:05.905481Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.24:2380"}
	{"level":"info","ts":"2024-09-06T19:00:05.905490Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-343000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.24:2380"],"advertise-client-urls":["https://192.169.0.24:2379"]}
	
	
	==> kernel <==
	 19:02:17 up 2 min,  0 users,  load average: 0.12, 0.06, 0.02
	Linux ha-343000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9e6763d81a89] <==
	I0906 18:59:27.723199       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727295       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:37.727338       1 main.go:299] handling current node
	I0906 18:59:37.727349       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:37.727353       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:37.727428       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:37.727453       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727489       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:37.727513       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:47.728363       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:47.728518       1 main.go:299] handling current node
	I0906 18:59:47.728633       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:47.728739       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:47.728918       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:47.728997       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:47.729121       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:47.729229       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:57.722632       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:57.722671       1 main.go:299] handling current node
	I0906 18:59:57.722682       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:57.722686       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:57.722937       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:57.722967       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:57.723092       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:57.723199       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fa4173483b35] <==
	I0906 19:01:41.578828       1 options.go:228] external host was not specified, using 192.169.0.24
	I0906 19:01:41.580198       1 server.go:142] Version: v1.31.0
	I0906 19:01:41.580268       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:01:41.924923       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:01:41.928767       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:01:41.931279       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:01:41.931403       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:01:41.931674       1 instance.go:232] Using reconciler: lease
	W0906 19:02:01.924600       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:01.924956       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:01.933589       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0906 19:02:01.933758       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [c30e1728fc82] <==
	I0906 19:01:44.954716       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:01:45.412135       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:01:45.412386       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:01:45.413610       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:01:45.413776       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:01:45.414123       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:01:45.414254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:02:05.417390       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.24:8443/healthz\": dial tcp 192.169.0.24:8443: connect: connection refused"
	
	
	==> kube-proxy [9ab0b6ac90ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:55:13.194683       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:55:13.204778       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 18:55:13.204815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:55:13.260675       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:55:13.260697       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:55:13.260715       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:55:13.267079       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:55:13.267303       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:55:13.267312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:55:13.269494       1 config.go:197] "Starting service config controller"
	I0906 18:55:13.269521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:55:13.269531       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:55:13.269534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:55:13.269766       1 config.go:326] "Starting node config controller"
	I0906 18:55:13.269792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:55:13.371232       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:55:13.371252       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:55:13.371258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9b99b2f8d6ed] <==
	E0906 19:02:02.940432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.24:41900->192.169.0.24:8443: read: connection reset by peer" logger="UnhandledError"
	W0906 19:02:05.069159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.069252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:05.223901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.24:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.224034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.24:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:05.985644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.985935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:07.751221       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:07.751297       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:08.534428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:08.534502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:09.228523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:09.228578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:10.309496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:10.309595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:10.913838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:10.914076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:13.134630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:13.134666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:16.082933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:16.083031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:16.701161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:16.701192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:16.712129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:16.712179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [e17d9a49b80d] <==
	E0906 18:57:43.584607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3acb7359-b948-41f1-bb46-78ba7ca6ab4e(default/busybox-7dff88458-x6w7h) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x6w7h"
	E0906 18:57:43.584627       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x6w7h\": pod busybox-7dff88458-x6w7h is already assigned to node \"ha-343000\"" pod="default/busybox-7dff88458-x6w7h"
	I0906 18:57:43.584740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x6w7h" node="ha-343000"
	E0906 18:57:43.585378       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jk74s\": pod busybox-7dff88458-jk74s is already assigned to node \"ha-343000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jk74s" node="ha-343000-m02"
	E0906 18:57:43.586332       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2a6cd3d8-0270-4be8-adee-f6509d6f7d6a(default/busybox-7dff88458-jk74s) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jk74s"
	E0906 18:57:43.586381       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jk74s\": pod busybox-7dff88458-jk74s is already assigned to node \"ha-343000-m02\"" pod="default/busybox-7dff88458-jk74s"
	I0906 18:57:43.586399       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jk74s" node="ha-343000-m02"
	E0906 18:57:43.737576       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-2j5md is already present in the active queue" pod="default/busybox-7dff88458-2j5md"
	E0906 18:58:13.148396       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zj66t\": pod kube-proxy-zj66t is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zj66t" node="ha-343000-m04"
	E0906 18:58:13.149107       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cc9bbfbe-59d6-4ed5-acd0-d85ac97eb0f6(kube-system/kube-proxy-zj66t) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zj66t"
	E0906 18:58:13.149342       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zj66t\": pod kube-proxy-zj66t is already assigned to node \"ha-343000-m04\"" pod="kube-system/kube-proxy-zj66t"
	I0906 18:58:13.149401       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zj66t" node="ha-343000-m04"
	E0906 18:58:13.149049       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vbw2g\": pod kindnet-vbw2g is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vbw2g" node="ha-343000-m04"
	E0906 18:58:13.149550       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 73997222-df35-486b-a5c3-c245cfbde23e(kube-system/kindnet-vbw2g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vbw2g"
	E0906 18:58:13.149563       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vbw2g\": pod kindnet-vbw2g is already assigned to node \"ha-343000-m04\"" pod="kube-system/kindnet-vbw2g"
	I0906 18:58:13.149716       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vbw2g" node="ha-343000-m04"
	E0906 18:58:13.174957       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8hww6\": pod kube-proxy-8hww6 is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8hww6" node="ha-343000-m04"
	E0906 18:58:13.175481       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod aa46eef9-733c-4f42-8c7c-ad0ed8009b8a(kube-system/kube-proxy-8hww6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8hww6"
	E0906 18:58:13.175757       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8hww6\": pod kube-proxy-8hww6 is already assigned to node \"ha-343000-m04\"" pod="kube-system/kube-proxy-8hww6"
	I0906 18:58:13.175909       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8hww6" node="ha-343000-m04"
	E0906 18:58:14.877822       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q6946\": pod kindnet-q6946 is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-q6946" node="ha-343000-m04"
	E0906 18:58:14.877973       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5c83531b-b03e-46db-9169-70bd1bf41235(kube-system/kindnet-q6946) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-q6946"
	E0906 18:58:14.878004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q6946\": pod kindnet-q6946 is already assigned to node \"ha-343000-m04\"" pod="kube-system/kindnet-q6946"
	I0906 18:58:14.878024       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-q6946" node="ha-343000-m04"
	E0906 19:00:05.908240       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 06 19:02:02 ha-343000 kubelet[1516]: I0906 19:02:02.146866    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:02 ha-343000 kubelet[1516]: E0906 19:02:02.146989    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:03 ha-343000 kubelet[1516]: E0906 19:02:03.981737    1516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-343000"
	Sep 06 19:02:03 ha-343000 kubelet[1516]: E0906 19:02:03.981859    1516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-343000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.189750    1516 scope.go:117] "RemoveContainer" containerID="5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.190459    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.190562    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.271932    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.272125    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.478080    1516 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-343000\" not found"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: E0906 19:02:07.051517    1516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-343000.17f2bcdb164062c9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-343000,UID:ha-343000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-343000,},FirstTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,LastTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-343000,}"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: I0906 19:02:07.201616    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: E0906 19:02:07.201695    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:08 ha-343000 kubelet[1516]: I0906 19:02:08.334205    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:08 ha-343000 kubelet[1516]: E0906 19:02:08.334395    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:10 ha-343000 kubelet[1516]: I0906 19:02:10.984635    1516 kubelet_node_status.go:72] "Attempting to register node" node="ha-343000"
	Sep 06 19:02:12 ha-343000 kubelet[1516]: I0906 19:02:12.223100    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:12 ha-343000 kubelet[1516]: E0906 19:02:12.223243    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: W0906 19:02:13.195842    1516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196051    1516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196151    1516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-343000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196187    1516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-343000"
	Sep 06 19:02:16 ha-343000 kubelet[1516]: W0906 19:02:16.267122    1516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 06 19:02:16 ha-343000 kubelet[1516]: E0906 19:02:16.267168    1516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:02:16 ha-343000 kubelet[1516]: E0906 19:02:16.479406    1516 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-343000\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000: exit status 2 (153.51231ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-343000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (2.96s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-343000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-343000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-343000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-343000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.24\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.25\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.26\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.27\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000: exit status 2 (153.879279ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 logs -n 25: (2.113339521s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m04 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp testdata/cp-test.txt                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000 sudo cat                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m03 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-343000 node stop m02 -v=7                                                                                                 | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-343000 node start m02 -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000 -v=7                                                                                                       | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-343000 -v=7                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 12:00 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:00 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	| node    | ha-343000 node delete m03 -v=7                                                                                               | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:00:13
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:00:13.694390   12094 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:00:13.694568   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694575   12094 out.go:358] Setting ErrFile to fd 2...
	I0906 12:00:13.694584   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:00:13.694756   12094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:00:13.696524   12094 out.go:352] Setting JSON to false
	I0906 12:00:13.721080   12094 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10784,"bootTime":1725638429,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:00:13.721173   12094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:00:13.742655   12094 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:00:13.784492   12094 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:00:13.784545   12094 notify.go:220] Checking for updates...
	I0906 12:00:13.827582   12094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:13.848323   12094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:00:13.869497   12094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:00:13.890655   12094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:00:13.911464   12094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:00:13.933299   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:13.933473   12094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:00:13.934147   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:13.934226   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:13.943846   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56123
	I0906 12:00:13.944225   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:13.944638   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:13.944649   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:13.944842   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:13.944971   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:13.973620   12094 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:00:13.994428   12094 start.go:297] selected driver: hyperkit
	I0906 12:00:13.994464   12094 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:d
efault APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:f
alse headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:13.994699   12094 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:00:13.994893   12094 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:13.995108   12094 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:00:14.004848   12094 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:00:14.008700   12094 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.008720   12094 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:00:14.011904   12094 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:00:14.011944   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:14.011950   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:14.012025   12094 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:14.012136   12094 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:00:14.054473   12094 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:00:14.075405   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:14.075507   12094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:00:14.075533   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:14.075741   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:14.075759   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:14.075970   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.076999   12094 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:14.077104   12094 start.go:364] duration metric: took 81.424µs to acquireMachinesLock for "ha-343000"
	I0906 12:00:14.077136   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:14.077155   12094 fix.go:54] fixHost starting: 
	I0906 12:00:14.077547   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:14.077578   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:14.086539   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56125
	I0906 12:00:14.086911   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:14.087275   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:14.087288   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:14.087499   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:14.087626   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.087742   12094 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:00:14.087847   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.087908   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 10421
	I0906 12:00:14.088810   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.088857   12094 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:00:14.088881   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:00:14.088974   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:14.130290   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:00:14.151187   12094 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:00:14.151341   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.151358   12094 main.go:141] libmachine: (ha-343000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:00:14.152544   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 10421 missing from process table
	I0906 12:00:14.152554   12094 main.go:141] libmachine: (ha-343000) DBG | pid 10421 is in state "Stopped"
	I0906 12:00:14.152567   12094 main.go:141] libmachine: (ha-343000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid...
	I0906 12:00:14.152736   12094 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:00:14.278050   12094 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:00:14.278072   12094 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:14.278193   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278238   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a48d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:14.278268   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:14.278300   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:14.278328   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:14.279797   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 DEBUG: hyperkit: Pid is 12107
	I0906 12:00:14.280167   12094 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:00:14.280184   12094 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:14.280255   12094 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:00:14.282230   12094 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:00:14.282307   12094 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:14.282320   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:14.282355   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:14.282372   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:00:14.282386   12094 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca170}
	I0906 12:00:14.282393   12094 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:00:14.282401   12094 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:00:14.282427   12094 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:00:14.283073   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:14.283250   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:14.283690   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:14.283700   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:14.283812   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:14.283907   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:14.284012   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284129   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:14.284231   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:14.284358   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:14.284630   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:14.284642   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:14.288262   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:14.344998   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:14.345710   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.345724   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.345740   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.345751   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.732607   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:14.732636   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:14.847834   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:14.847852   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:14.847864   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:14.847895   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:14.848717   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:14.848731   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:00:20.456737   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:00:20.456790   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:00:20.456799   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:00:20.482344   12094 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:00:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:00:49.356770   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:00:49.356783   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.356936   12094 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:00:49.356945   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.357080   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.357164   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.357260   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357348   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.357460   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.357608   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.357783   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.357791   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:00:49.434653   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:00:49.434670   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.434810   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.434910   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.434998   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.435076   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.435208   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:49.435362   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:49.435373   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:00:49.507101   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:00:49.507131   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:00:49.507148   12094 buildroot.go:174] setting up certificates
	I0906 12:00:49.507157   12094 provision.go:84] configureAuth start
	I0906 12:00:49.507164   12094 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:00:49.507301   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:49.507386   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.507484   12094 provision.go:143] copyHostCerts
	I0906 12:00:49.507518   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.507591   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:00:49.507599   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:00:49.508035   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:00:49.508249   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508290   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:00:49.508295   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:00:49.508374   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:00:49.508511   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508554   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:00:49.508560   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:00:49.508641   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:00:49.508778   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:00:49.908537   12094 provision.go:177] copyRemoteCerts
	I0906 12:00:49.908600   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:00:49.908618   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:49.908766   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:49.908869   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:49.908969   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:49.909081   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:49.950319   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:00:49.950395   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:00:49.969178   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:00:49.969240   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0906 12:00:49.988087   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:00:49.988150   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:00:50.007042   12094 provision.go:87] duration metric: took 499.867022ms to configureAuth
	I0906 12:00:50.007055   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:00:50.007239   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:50.007254   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:50.007383   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.007480   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.007568   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007658   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.007737   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.007851   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.007970   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.007977   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:00:50.074324   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:00:50.074334   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:00:50.074409   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:00:50.074422   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.074584   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.074695   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074789   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.074892   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.075030   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.075178   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.075221   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:00:50.150993   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:00:50.151016   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:50.151152   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:50.151245   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151341   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:50.151440   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:50.151557   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:50.151697   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:50.151709   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:00:51.817119   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:00:51.817133   12094 machine.go:96] duration metric: took 37.533362432s to provisionDockerMachine
	I0906 12:00:51.817147   12094 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:00:51.817155   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:00:51.817165   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.817341   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:00:51.817358   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.817453   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.817539   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.817633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.817710   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.857455   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:00:51.860581   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:00:51.860594   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:00:51.860691   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:00:51.860881   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:00:51.860887   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:00:51.861099   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:00:51.869229   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:51.888403   12094 start.go:296] duration metric: took 71.247262ms for postStartSetup
	I0906 12:00:51.888426   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:51.888596   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:00:51.888609   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.888701   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.888782   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.889409   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.889522   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:51.930243   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:00:51.930305   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:00:51.984449   12094 fix.go:56] duration metric: took 37.907224883s for fixHost
	I0906 12:00:51.984473   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:51.984633   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:51.984732   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984820   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:51.984909   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:51.985037   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:51.985190   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:00:51.985198   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:00:52.050855   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649252.136473627
	
	I0906 12:00:52.050870   12094 fix.go:216] guest clock: 1725649252.136473627
	I0906 12:00:52.050876   12094 fix.go:229] Guest: 2024-09-06 12:00:52.136473627 -0700 PDT Remote: 2024-09-06 12:00:51.984463 -0700 PDT m=+38.325391256 (delta=152.010627ms)
	I0906 12:00:52.050893   12094 fix.go:200] guest clock delta is within tolerance: 152.010627ms
	I0906 12:00:52.050897   12094 start.go:83] releasing machines lock for "ha-343000", held for 37.97370768s
	I0906 12:00:52.050919   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051055   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:52.051151   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051468   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051587   12094 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:00:52.051648   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:00:52.051681   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051732   12094 ssh_runner.go:195] Run: cat /version.json
	I0906 12:00:52.051743   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:00:52.051763   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051867   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.051920   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:00:52.051954   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052063   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:00:52.052085   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.052169   12094 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:00:52.052247   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:00:52.086438   12094 ssh_runner.go:195] Run: systemctl --version
	I0906 12:00:52.137495   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:00:52.142191   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:00:52.142231   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:00:52.154446   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:00:52.154458   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.154552   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.172091   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:00:52.181012   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:00:52.190031   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.190079   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:00:52.199064   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.207848   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:00:52.216656   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:00:52.225515   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:00:52.234566   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:00:52.243255   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:00:52.252029   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:00:52.260858   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:00:52.268821   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:00:52.276765   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.377515   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:00:52.394471   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:00:52.394552   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:00:52.407063   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.418612   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:00:52.433923   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:00:52.444946   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.455717   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:00:52.478561   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:00:52.492332   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:00:52.507486   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:00:52.510450   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:00:52.518207   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:00:52.531443   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:00:52.631849   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:00:52.738034   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:00:52.738112   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:00:52.751782   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:52.847435   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:00:55.174969   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.327505108s)
	I0906 12:00:55.175030   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:00:55.186551   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.197381   12094 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:00:55.299777   12094 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:00:55.398609   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.498794   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:00:55.512395   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:00:55.523922   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:55.617484   12094 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:00:55.684124   12094 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:00:55.684200   12094 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:00:55.688892   12094 start.go:563] Will wait 60s for crictl version
	I0906 12:00:55.688940   12094 ssh_runner.go:195] Run: which crictl
	I0906 12:00:55.692913   12094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:00:55.719238   12094 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:00:55.719311   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.738356   12094 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:00:55.778738   12094 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:00:55.778787   12094 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:00:55.779172   12094 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:00:55.783863   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:00:55.794970   12094 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAV
IP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp
:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:00:55.795055   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:55.795104   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.809713   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.809724   12094 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:00:55.809795   12094 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:00:55.823764   12094 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:00:55.823788   12094 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:00:55.823798   12094 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:00:55.823893   12094 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:00:55.823968   12094 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:00:55.861417   12094 cni.go:84] Creating CNI manager for ""
	I0906 12:00:55.861428   12094 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:00:55.861437   12094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:00:55.861452   12094 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:00:55.861532   12094 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
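	The kubeadm config dumped above is a multi-document YAML stream carrying four kinds: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch of a sanity check on such a stream, using an abbreviated stand-in file (`kubeadm.yaml` here is a local scratch name; the log shows the real file being written to /var/tmp/minikube/kubeadm.yaml.new on the VM):

```shell
# Write an abbreviated stand-in for the four-document stream above.
cat > kubeadm.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# List the kind of each document; a complete stream shows all four.
grep '^kind:' kubeadm.yaml
```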
	I0906 12:00:55.861545   12094 kube-vip.go:115] generating kube-vip config ...
	I0906 12:00:55.861593   12094 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:00:55.875047   12094 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:00:55.875114   12094 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:00:55.875172   12094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:00:55.890674   12094 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:00:55.890728   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:00:55.898141   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:00:55.911696   12094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:00:55.925468   12094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:00:55.940252   12094 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:00:55.953658   12094 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:00:55.956513   12094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
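	The bash one-liner above is an idempotent replace-or-append: strip any existing line for control-plane.minikube.internal from /etc/hosts, append the fresh entry, and copy the result back. A sketch of the same pattern against a scratch file, so it needs no sudo (the file name and addresses here are illustrative; the original anchors its grep on a literal tab before the hostname):

```shell
HOSTS=./hosts.test   # stand-in for /etc/hosts (assumption)
printf '127.0.0.1\tlocalhost\n192.169.0.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any existing entry for the name, then append the current VIP mapping.
{ grep -v 'control-plane.minikube.internal$' "$HOSTS"; \
  printf '192.169.0.254\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
# Exactly one entry remains, pointing at the new address.
grep 'control-plane.minikube.internal' "$HOSTS"
```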
	I0906 12:00:55.965807   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:00:56.068757   12094 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:00:56.082925   12094 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:00:56.082937   12094 certs.go:194] generating shared ca certs ...
	I0906 12:00:56.082949   12094 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.083129   12094 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:00:56.083206   12094 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:00:56.083216   12094 certs.go:256] generating profile certs ...
	I0906 12:00:56.083325   12094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:00:56.083344   12094 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:00:56.083361   12094 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.24 192.169.0.25 192.169.0.26 192.169.0.254]
	I0906 12:00:56.334331   12094 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 ...
	I0906 12:00:56.334349   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57: {Name:mke69baf11a7ce9368028746c3ea673d595b5389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.334927   12094 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 ...
	I0906 12:00:56.334938   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57: {Name:mk818d10389922964dda91749efae3a655d8f5d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.335204   12094 certs.go:381] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt
	I0906 12:00:56.335461   12094 certs.go:385] copying /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key
	I0906 12:00:56.335705   12094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:00:56.335715   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:00:56.335738   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:00:56.335758   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:00:56.335778   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:00:56.335796   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:00:56.335815   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:00:56.335833   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:00:56.335852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:00:56.335940   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:00:56.335991   12094 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:00:56.335999   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:00:56.336041   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:00:56.336081   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:00:56.336121   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:00:56.336206   12094 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:00:56.336250   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.336272   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.336292   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.336712   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:00:56.388979   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:00:56.414966   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:00:56.439775   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:00:56.466208   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:00:56.492195   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:00:56.512216   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:00:56.532441   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:00:56.552158   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:00:56.571661   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:00:56.591148   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:00:56.610631   12094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:00:56.624148   12094 ssh_runner.go:195] Run: openssl version
	I0906 12:00:56.628419   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:00:56.636965   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640480   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.640510   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:00:56.644827   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:00:56.653067   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:00:56.661485   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665034   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.665069   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:00:56.669468   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:00:56.677956   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:00:56.686368   12094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689913   12094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.689948   12094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:00:56.694107   12094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
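	The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` sequence above follows the c_rehash convention: OpenSSL locates CA certificates in a hashed directory by an 8-hex-digit subject-name hash with a `.0` suffix (hence names like 51391683.0 and b5213941.0 in the log). A sketch with a throwaway self-signed cert; all file names here are illustrative:

```shell
# Generate a throwaway self-signed CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.pem \
  -subj "/CN=demoCA" -days 1 2>/dev/null
# Compute the subject-name hash OpenSSL uses for directory lookup.
hash=$(openssl x509 -hash -noout -in demo.pem)
# Create the <hash>.0 symlink the lookup expects.
ln -fs demo.pem "${hash}.0"
ls -l "${hash}.0"
```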
	I0906 12:00:56.702602   12094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:00:56.706177   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:00:56.711002   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:00:56.715284   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:00:56.720202   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:00:56.724667   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:00:56.728981   12094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
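	The `-checkend 86400` runs above are how minikube decides whether certs need regeneration: the command exits 0 if the certificate is still valid 86400 seconds (24 hours) from now, non-zero otherwise. A sketch with a throwaway cert (names are illustrative):

```shell
# Throwaway cert valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes -keyout t.key -out t.crt \
  -subj "/CN=t" -days 365 2>/dev/null
# Exit 0: the cert is still valid 24h from now.
openssl x509 -noout -in t.crt -checkend 86400 && echo "valid for 24h"
# Exit 1: the cert will have expired two years from now.
openssl x509 -noout -in t.crt -checkend 63072000 || echo "expires within 2y"
```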
	I0906 12:00:56.733338   12094 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:00:56.733444   12094 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:00:56.746587   12094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:00:56.754476   12094 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:00:56.754485   12094 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:00:56.754526   12094 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:00:56.762271   12094 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:00:56.762575   12094 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.762661   12094 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:00:56.762831   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.763230   12094 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.763419   12094 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xf24aae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:00:56.763713   12094 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:00:56.763884   12094 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:00:56.771199   12094 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:00:56.771211   12094 kubeadm.go:597] duration metric: took 16.721202ms to restartPrimaryControlPlane
	I0906 12:00:56.771216   12094 kubeadm.go:394] duration metric: took 37.882882ms to StartCluster
	I0906 12:00:56.771224   12094 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771295   12094 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:00:56.771611   12094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:00:56.771827   12094 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:00:56.771840   12094 start.go:241] waiting for startup goroutines ...
	I0906 12:00:56.771853   12094 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:00:56.771974   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.815354   12094 out.go:177] * Enabled addons: 
	I0906 12:00:56.836135   12094 addons.go:510] duration metric: took 64.272275ms for enable addons: enabled=[]
	I0906 12:00:56.836233   12094 start.go:246] waiting for cluster config update ...
	I0906 12:00:56.836259   12094 start.go:255] writing updated cluster config ...
	I0906 12:00:56.858430   12094 out.go:201] 
	I0906 12:00:56.879711   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:00:56.879825   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.901995   12094 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:00:56.944141   12094 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:00:56.944200   12094 cache.go:56] Caching tarball of preloaded images
	I0906 12:00:56.944408   12094 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:00:56.944427   12094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:00:56.944549   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:56.945615   12094 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:00:56.945736   12094 start.go:364] duration metric: took 97.464µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:00:56.945762   12094 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:00:56.945772   12094 fix.go:54] fixHost starting: m02
	I0906 12:00:56.946173   12094 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:00:56.946201   12094 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:00:56.955570   12094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56147
	I0906 12:00:56.955905   12094 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:00:56.956247   12094 main.go:141] libmachine: Using API Version  1
	I0906 12:00:56.956263   12094 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:00:56.956475   12094 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:00:56.956595   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:56.956699   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:00:56.956773   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:56.956871   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 10914
	I0906 12:00:56.957763   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:56.957792   12094 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:00:56.957800   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:00:56.957882   12094 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:00:57.000302   12094 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:00:57.021304   12094 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:00:57.021585   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.021622   12094 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:00:57.022935   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10914 missing from process table
	I0906 12:00:57.022948   12094 main.go:141] libmachine: (ha-343000-m02) DBG | pid 10914 is in state "Stopped"
	I0906 12:00:57.023011   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:00:57.023381   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:00:57.049902   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:00:57.049929   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:00:57.050062   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050089   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:00:57.050146   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:00:57.050177   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:00:57.050183   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:00:57.051588   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 DEBUG: hyperkit: Pid is 12118
	I0906 12:00:57.051949   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:00:57.051968   12094 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:00:57.052042   12094 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:00:57.054138   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:00:57.054208   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:00:57.054228   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca2c7}
	I0906 12:00:57.054254   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:00:57.054281   12094 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca27e}
	I0906 12:00:57.054300   12094 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:00:57.054322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:00:57.054328   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:00:57.054969   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:00:57.055183   12094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:00:57.055671   12094 machine.go:93] provisionDockerMachine start ...
	I0906 12:00:57.055682   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:00:57.055826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:00:57.055916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:00:57.056038   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056169   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:00:57.056275   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:00:57.056401   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:00:57.056636   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:00:57.056647   12094 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:00:57.059445   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:00:57.069382   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:00:57.070322   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.070335   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.070343   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.070352   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.458835   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:00:57.458851   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:00:57.573579   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:00:57.573599   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:00:57.573609   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:00:57.573621   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:00:57.574503   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:00:57.574513   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:00:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:01:03.177947   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:01:03.178017   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:01:03.178029   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:01:03.201747   12094 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:01:03 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:01:08.125551   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:01:08.125569   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125712   12094 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:01:08.125723   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.125829   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.125916   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.126006   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126090   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.126176   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.126310   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.126460   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.126470   12094 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:01:08.196553   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:01:08.196570   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.196738   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.196849   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.196938   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.197031   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.197164   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.197302   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.197315   12094 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:01:08.265441   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:01:08.265457   12094 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:01:08.265466   12094 buildroot.go:174] setting up certificates
	I0906 12:01:08.265473   12094 provision.go:84] configureAuth start
	I0906 12:01:08.265479   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:01:08.265616   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:08.265727   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.265818   12094 provision.go:143] copyHostCerts
	I0906 12:01:08.265852   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.265899   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:01:08.265905   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:01:08.266042   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:01:08.266231   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266259   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:01:08.266263   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:01:08.266340   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:01:08.266475   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266502   12094 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:01:08.266507   12094 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:01:08.266580   12094 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:01:08.266719   12094 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:01:08.411000   12094 provision.go:177] copyRemoteCerts
	I0906 12:01:08.411052   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:01:08.411067   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.411204   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.411300   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.411401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.411487   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:08.448748   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:01:08.448826   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:01:08.467690   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:01:08.467754   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:01:08.486653   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:01:08.486713   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:01:08.505720   12094 provision.go:87] duration metric: took 240.238536ms to configureAuth
	I0906 12:01:08.505733   12094 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:01:08.505898   12094 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:01:08.505912   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:08.506045   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.506132   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.506232   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506324   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.506401   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.506529   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.506694   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.506702   12094 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:01:08.568618   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:01:08.568634   12094 buildroot.go:70] root file system type: tmpfs
	I0906 12:01:08.568774   12094 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:01:08.568789   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.568942   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.569034   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.569216   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.569394   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.569538   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.569591   12094 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:01:08.641655   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:01:08.641670   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:08.641797   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:08.641898   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.641987   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:08.642088   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:08.642231   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:08.642380   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:08.642393   12094 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:01:10.295573   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:01:10.295588   12094 machine.go:96] duration metric: took 13.23988234s to provisionDockerMachine
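	The `sudo diff -u ... || { sudo mv ...; sudo systemctl ...; }` command above is an update-only-if-changed idiom: the freshly rendered unit is compared against the live one, and the mv/daemon-reload/restart branch runs only when they differ or when the live file does not exist yet (as in this run, where `diff` reports `can't stat`). A minimal stand-alone sketch of that idiom (function name and paths are illustrative, not minikube's actual code):

	```shell
	# Sketch of the diff-or-replace idiom from the log: install a candidate
	# file over the live one only when the content actually changed, so the
	# caller can skip the service restart otherwise. Names are illustrative.
	install_if_changed() {
	  src="$1"   # newly rendered file, e.g. docker.service.new
	  dst="$2"   # live file, e.g. /lib/systemd/system/docker.service
	  if diff -u "$dst" "$src" >/dev/null 2>&1; then
	    rm -f "$src"   # identical: discard the candidate, signal "no change"
	    return 1
	  fi
	  mv "$src" "$dst" # changed, or dst missing (diff fails): install it
	  return 0
	}
	```

	A caller would chain the restart on success, e.g. `install_if_changed docker.service.new docker.service && systemctl daemon-reload`, mirroring the `|| { ...; }` grouping in the logged command.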
	I0906 12:01:10.295597   12094 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:01:10.295605   12094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:01:10.295615   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.295802   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:01:10.295816   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.295925   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.296020   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.296104   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.296195   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.338012   12094 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:01:10.342178   12094 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:01:10.342189   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:01:10.342305   12094 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:01:10.342480   12094 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:01:10.342486   12094 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:01:10.342677   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:01:10.352005   12094 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:01:10.386594   12094 start.go:296] duration metric: took 90.988002ms for postStartSetup
	I0906 12:01:10.386614   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.387260   12094 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:01:10.387299   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.387908   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.388016   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.388130   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.388217   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.425216   12094 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:01:10.425274   12094 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:01:10.478532   12094 fix.go:56] duration metric: took 13.532732174s for fixHost
	I0906 12:01:10.478558   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.478717   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.478826   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.478930   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.479017   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.479147   12094 main.go:141] libmachine: Using SSH client type: native
	I0906 12:01:10.479284   12094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xdb8eea0] 0xdb91c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:01:10.479291   12094 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:01:10.540605   12094 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649270.629925942
	
	I0906 12:01:10.540619   12094 fix.go:216] guest clock: 1725649270.629925942
	I0906 12:01:10.540624   12094 fix.go:229] Guest: 2024-09-06 12:01:10.629925942 -0700 PDT Remote: 2024-09-06 12:01:10.478547 -0700 PDT m=+56.819439281 (delta=151.378942ms)
	I0906 12:01:10.540635   12094 fix.go:200] guest clock delta is within tolerance: 151.378942ms
	I0906 12:01:10.540639   12094 start.go:83] releasing machines lock for "ha-343000-m02", held for 13.594865643s
	I0906 12:01:10.540654   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.540778   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:01:10.562345   12094 out.go:177] * Found network options:
	I0906 12:01:10.583938   12094 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:01:10.604860   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.604892   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605507   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605705   12094 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:01:10.605840   12094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:01:10.605876   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:01:10.605977   12094 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:01:10.606085   12094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:01:10.606085   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606109   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:01:10.606320   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606351   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:01:10.606520   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606538   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:01:10.606733   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:01:10.606776   12094 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:01:10.606943   12094 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:01:10.641836   12094 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:01:10.641895   12094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:01:10.688301   12094 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:01:10.688319   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.688399   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:10.704168   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:01:10.713221   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:01:10.722234   12094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:01:10.722279   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:01:10.731269   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.740159   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:01:10.749214   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:01:10.758175   12094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:01:10.767634   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:01:10.776683   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:01:10.785787   12094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:01:10.794766   12094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:01:10.803033   12094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:01:10.811174   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:10.907940   12094 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:01:10.926633   12094 start.go:495] detecting cgroup driver to use...
	I0906 12:01:10.926708   12094 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:01:10.940259   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.957368   12094 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:01:10.981430   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:01:10.994068   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.004477   12094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:01:11.026305   12094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:01:11.036854   12094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:01:11.051822   12094 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:01:11.054832   12094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:01:11.062232   12094 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:01:11.076011   12094 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:01:11.171774   12094 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:01:11.275110   12094 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:01:11.275140   12094 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:01:11.288936   12094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:01:11.387536   12094 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:02:12.406129   12094 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.018456537s)
	I0906 12:02:12.406196   12094 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:02:12.441627   12094 out.go:201] 
	W0906 12:02:12.462568   12094 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:01:09 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.043870308Z" level=info msg="Starting up"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044354837Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:01:09 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:09.044967157Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=487
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.060420044Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076676910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076721339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076763510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076773987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076859504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.076892444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077013033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077047570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077059390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077066478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077150509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.077343912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078819720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078854498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078962243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.078995587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079114046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.079161625Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.080994591Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081080643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081116220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081128366Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081138130Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081232741Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081450103Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081587892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081629697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081643361Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081652352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081661711Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081669570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081678662Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081687446Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081695440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081703002Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081710262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081725308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081734548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081742314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081750339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081759393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081767473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081774660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081782278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081789971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081798862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081806711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081823704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081834097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081843649Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081857316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081865237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081872738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081916471Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081929926Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081937399Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081945271Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081951561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081959071Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.081965521Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082596203Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082656975Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082684672Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:01:09 ha-343000-m02 dockerd[487]: time="2024-09-06T19:01:09.082843808Z" level=info msg="containerd successfully booted in 0.023145s"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.061791246Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.091753353Z" level=info msg="Loading containers: start."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.248274667Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.308626646Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.351686229Z" level=info msg="Loading containers: done."
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359245186Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.359419132Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.381469858Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:01:10 ha-343000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:01:10 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:10.384079790Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.489514557Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490667952Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.490928769Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491093022Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:01:11 ha-343000-m02 dockerd[481]: time="2024-09-06T19:01:11.491132226Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:01:11 ha-343000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:01:12 ha-343000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:01:12 ha-343000-m02 dockerd[1161]: time="2024-09-06T19:01:12.525113343Z" level=info msg="Starting up"
	Sep 06 19:02:12 ha-343000-m02 dockerd[1161]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:02:12 ha-343000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
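The capture above shows the actual failure: after minikube's `systemctl restart docker`, the second dockerd start (pid 1161) waits on the containerd socket and gives up at the 60-second dial deadline ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"). When triaging reports like this one, it can help to isolate that fatal line from a saved copy of the journalctl output; a minimal sketch, assuming the capture was saved to a hypothetical file `docker.log`:

```shell
# Sketch: count the fatal containerd-dial lines in a saved journalctl capture.
# docker.log is a hypothetical file holding the two relevant lines from above.
cat > docker.log <<'EOF'
Sep 06 19:01:12 ha-343000-m02 dockerd[1161]: time="2024-09-06T19:01:12.525113343Z" level=info msg="Starting up"
Sep 06 19:02:12 ha-343000-m02 dockerd[1161]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
EOF
# grep -c prints the number of matching lines; here exactly one line matches.
grep -c 'failed to dial "/run/containerd/containerd.sock"' docker.log
```

On a live node the same signature would come from `journalctl -u docker` directly; a nonzero count points at containerd not serving its socket rather than at docker's own configuration.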
	W0906 12:02:12.462663   12094 out.go:270] * 
	W0906 12:02:12.463787   12094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:02:12.526575   12094 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:01:03 ha-343000 dockerd[1107]: time="2024-09-06T19:01:03.251096467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:24 ha-343000 dockerd[1101]: time="2024-09-06T19:01:24.624172789Z" level=info msg="ignoring event" container=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.624732214Z" level=info msg="shim disconnected" id=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d namespace=moby
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.625304050Z" level=warning msg="cleaning up after shim disconnected" id=6e53daedacc02e4b9882bd9c12cf84c9a554ea154624b416268b53d71a4e0b7d namespace=moby
	Sep 06 19:01:24 ha-343000 dockerd[1107]: time="2024-09-06T19:01:24.625348043Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634231704Z" level=info msg="shim disconnected" id=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634560318Z" level=warning msg="cleaning up after shim disconnected" id=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1107]: time="2024-09-06T19:01:25.634621676Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:01:25 ha-343000 dockerd[1101]: time="2024-09-06T19:01:25.635351473Z" level=info msg="ignoring event" container=5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484108279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484268287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484288916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:41 ha-343000 dockerd[1107]: time="2024-09-06T19:01:41.484379400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474777447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474870901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.474947529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:01:44 ha-343000 dockerd[1107]: time="2024-09-06T19:01:44.475057744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:02:01 ha-343000 dockerd[1101]: time="2024-09-06T19:02:01.947178002Z" level=info msg="ignoring event" container=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.947382933Z" level=info msg="shim disconnected" id=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 namespace=moby
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.947983068Z" level=warning msg="cleaning up after shim disconnected" id=fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416 namespace=moby
	Sep 06 19:02:01 ha-343000 dockerd[1107]: time="2024-09-06T19:02:01.948026288Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.431689003Z" level=info msg="shim disconnected" id=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1101]: time="2024-09-06T19:02:05.432125006Z" level=info msg="ignoring event" container=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.432353887Z" level=warning msg="cleaning up after shim disconnected" id=c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b namespace=moby
	Sep 06 19:02:05 ha-343000 dockerd[1107]: time="2024-09-06T19:02:05.432492086Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c30e1728fc822       045733566833c                                                                                         35 seconds ago       Exited              kube-controller-manager   2                   a60c98dede813       kube-controller-manager-ha-343000
	fa4173483b359       604f5db92eaa8                                                                                         38 seconds ago       Exited              kube-apiserver            2                   53ce3e0f02186       kube-apiserver-ha-343000
	4066393d7e7ae       38af8ddebf499                                                                                         About a minute ago   Running             kube-vip                  0                   6a05e2d25f30e       kube-vip-ha-343000
	9b99b2f8d6eda       1766f54c897f0                                                                                         About a minute ago   Running             kube-scheduler            1                   920b387c38cf9       kube-scheduler-ha-343000
	11af4dafae646       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      1                   c94f15fec6f2c       etcd-ha-343000
	126eb18521cb6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited              busybox                   0                   2dc504f501783       busybox-7dff88458-x6w7h
	34d5a9fcc1387       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   80fa6178f69f4       coredns-6f6b679f8f-99jtt
	931a9cafdfafa       cbb01a7bd410d                                                                                         6 minutes ago        Exited              coredns                   0                   7b9ebf456874a       coredns-6f6b679f8f-q4rhs
	051e748db656a       6e38f40d628db                                                                                         6 minutes ago        Exited              storage-provisioner       0                   3259bb347e186       storage-provisioner
	9e6763d81a899       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              7 minutes ago        Exited              kindnet-cni               0                   c552ca6da226c       kindnet-tj4jx
	9ab0b6ac90ac6       ad83b2ca7b09e                                                                                         7 minutes ago        Exited              kube-proxy                0                   3b385975c32bf       kube-proxy-x6pfk
	b3713b7090d8f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago        Exited              kube-vip                  0                   23f83874ced46       kube-vip-ha-343000
	416ce752ac8fd       2e96e5913fc06                                                                                         7 minutes ago        Exited              etcd                      0                   e9c6f06bcc129       etcd-ha-343000
	e17d9a49b80dc       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            0                   e1c6cd8558983       kube-scheduler-ha-343000
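The table above shows both kube-apiserver and kube-controller-manager in the Exited state (attempt 2) while etcd, the scheduler, and kube-vip keep running. When scanning many such fixed-width listings, the exited workloads can be pulled out with a short awk pass; a sketch, assuming a hypothetical saved copy `ps.txt` (two sample rows shown, since the "CREATED" column contains spaces and defeats naive column indexing):

```shell
# Sketch: print the NAME that follows an "Exited" STATE field.
# ps.txt is a hypothetical saved copy of the container status table.
cat > ps.txt <<'EOF'
c30e1728fc822       045733566833c       35 seconds ago       Exited    kube-controller-manager   2    a60c98dede813       kube-controller-manager-ha-343000
4066393d7e7ae       38af8ddebf499       About a minute ago   Running   kube-vip                  0    6a05e2d25f30e       kube-vip-ha-343000
EOF
# Locate the literal field "Exited" and print the field after it (the NAME),
# which sidesteps the variable number of words in the CREATED column.
awk '{for (i = 1; i <= NF; i++) if ($i == "Exited") print $(i + 1)}' ps.txt
```

Only `kube-controller-manager` is printed for the sample rows; on the full table this pattern would surface every crashed control-plane component at a glance.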
	
	
	==> coredns [34d5a9fcc138] <==
	[INFO] 10.244.2.2:58789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120754s
	[INFO] 10.244.2.2:43811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080086s
	[INFO] 10.244.1.2:37705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094111s
	[INFO] 10.244.1.2:51020 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101921s
	[INFO] 10.244.1.2:35595 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128009s
	[INFO] 10.244.1.2:37466 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081653s
	[INFO] 10.244.1.2:44316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092754s
	[INFO] 10.244.0.4:46178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007817s
	[INFO] 10.244.0.4:45010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093888s
	[INFO] 10.244.0.4:53754 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054541s
	[INFO] 10.244.0.4:50908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074295s
	[INFO] 10.244.0.4:40350 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117915s
	[INFO] 10.244.2.2:46721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198726s
	[INFO] 10.244.2.2:49403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105805s
	[INFO] 10.244.2.2:38196 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015881s
	[INFO] 10.244.1.2:40271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009061s
	[INFO] 10.244.1.2:58192 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123353s
	[INFO] 10.244.1.2:58287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102796s
	[INFO] 10.244.2.2:60545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120865s
	[INFO] 10.244.1.2:58192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108489s
	[INFO] 10.244.0.4:46772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135939s
	[INFO] 10.244.0.4:57982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000032936s
	[INFO] 10.244.0.4:40948 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [931a9cafdfaf] <==
	[INFO] 10.244.2.2:47871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092349s
	[INFO] 10.244.2.2:36751 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154655s
	[INFO] 10.244.2.2:35765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113227s
	[INFO] 10.244.2.2:34953 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189846s
	[INFO] 10.244.1.2:37377 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000779385s
	[INFO] 10.244.1.2:36374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000523293s
	[INFO] 10.244.1.2:47415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043613s
	[INFO] 10.244.0.4:56645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006213s
	[INFO] 10.244.0.4:51009 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096214s
	[INFO] 10.244.0.4:41355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183012s
	[INFO] 10.244.2.2:50655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138209s
	[INFO] 10.244.1.2:38832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167262s
	[INFO] 10.244.0.4:46148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117645s
	[INFO] 10.244.0.4:43019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107376s
	[INFO] 10.244.0.4:57161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028792s
	[INFO] 10.244.0.4:42860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034502s
	[INFO] 10.244.2.2:36830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089883s
	[INFO] 10.244.2.2:47924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141909s
	[INFO] 10.244.2.2:47506 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000097095s
	[INFO] 10.244.1.2:49209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011143s
	[INFO] 10.244.1.2:36137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100085s
	[INFO] 10.244.1.2:47199 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000096821s
	[INFO] 10.244.0.4:43720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040385s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0906 19:02:19.582849    2862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:19.584690    2862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:19.586573    2862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:19.588049    2862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0906 19:02:19.589659    2862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036349] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007955] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.714820] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007157] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.755188] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.246507] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.781180] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +0.108807] systemd-fstab-generator[501]: Ignoring "noauto" option for root device
	[  +1.950391] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.261568] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +0.099812] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[  +0.114205] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +2.455299] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.094890] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.054578] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.048897] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +0.114113] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.445466] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[Sep 6 19:01] kauditd_printk_skb: 88 callbacks suppressed
	[ +21.676711] kauditd_printk_skb: 40 callbacks suppressed
	
	
	==> etcd [11af4dafae64] <==
	{"level":"warn","ts":"2024-09-06T19:02:16.003131Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:02:16.503804Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402128,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:02:16.555243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:16.555846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:16.989684Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-06T19:02:16.989763Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.001521319s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-06T19:02:16.989789Z","caller":"traceutil/trace.go:171","msg":"trace[1733632827] range","detail":"{range_begin:; range_end:; }","duration":"7.001560676s","start":"2024-09-06T19:02:09.988218Z","end":"2024-09-06T19:02:16.989779Z","steps":["trace[1733632827] 'agreement among raft nodes before linearized reading'  (duration: 7.001519451s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:02:16.989846Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:02:17.855003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:17.855055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:17.855066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:17.855077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:17.855082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:02:18.803754Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:02:18.803923Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:02:18.813499Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:02:18.813493Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"info","ts":"2024-09-06T19:02:19.155189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:19.155242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:19.155257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:19.155275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:02:19.155285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	
	
	==> etcd [416ce752ac8f] <==
	2024/09/06 19:00:05 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:05.829059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.22398722s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-06T19:00:05.833678Z","caller":"traceutil/trace.go:171","msg":"trace[234218137] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"7.228606076s","start":"2024-09-06T18:59:58.605067Z","end":"2024-09-06T19:00:05.833673Z","steps":["trace[234218137] 'agreement among raft nodes before linearized reading'  (duration: 7.223987765s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:00:05.833696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:59:58.605031Z","time spent":"7.228658753s","remote":"127.0.0.1:58976","response type":"/etcdserverpb.KV/Range","request count":0,"request size":64,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/\" range_end:\"/registry/services/endpoints0\" count_only:true "}
	2024/09/06 19:00:05 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:05.900577Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:00:05.900661Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.24:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:00:05.900726Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6dbe4340aa302ff2","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-06T19:00:05.902561Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902616Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902675Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902742Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902767Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902789Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902798Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"637242e03e6dd2d1"}
	{"level":"info","ts":"2024-09-06T19:00:05.902803Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.902808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.902818Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903077Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903113Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903226Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.903260Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:00:05.905401Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.24:2380"}
	{"level":"info","ts":"2024-09-06T19:00:05.905481Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.24:2380"}
	{"level":"info","ts":"2024-09-06T19:00:05.905490Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-343000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.24:2380"],"advertise-client-urls":["https://192.169.0.24:2379"]}
	
	
	==> kernel <==
	 19:02:19 up 2 min,  0 users,  load average: 0.11, 0.06, 0.02
	Linux ha-343000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9e6763d81a89] <==
	I0906 18:59:27.723199       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727295       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:37.727338       1 main.go:299] handling current node
	I0906 18:59:37.727349       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:37.727353       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:37.727428       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:37.727453       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727489       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:37.727513       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:47.728363       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:47.728518       1 main.go:299] handling current node
	I0906 18:59:47.728633       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:47.728739       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:47.728918       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:47.728997       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:47.729121       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:47.729229       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:57.722632       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:57.722671       1 main.go:299] handling current node
	I0906 18:59:57.722682       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:57.722686       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:57.722937       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:57.722967       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:57.723092       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:57.723199       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fa4173483b35] <==
	I0906 19:01:41.578828       1 options.go:228] external host was not specified, using 192.169.0.24
	I0906 19:01:41.580198       1 server.go:142] Version: v1.31.0
	I0906 19:01:41.580268       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:01:41.924923       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:01:41.928767       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:01:41.931279       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:01:41.931403       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:01:41.931674       1 instance.go:232] Using reconciler: lease
	W0906 19:02:01.924600       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:01.924956       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:01.933589       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0906 19:02:01.933758       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [c30e1728fc82] <==
	I0906 19:01:44.954716       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:01:45.412135       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:01:45.412386       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:01:45.413610       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:01:45.413776       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:01:45.414123       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:01:45.414254       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:02:05.417390       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.24:8443/healthz\": dial tcp 192.169.0.24:8443: connect: connection refused"
	
	
	==> kube-proxy [9ab0b6ac90ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:55:13.194683       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:55:13.204778       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 18:55:13.204815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:55:13.260675       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:55:13.260697       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:55:13.260715       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:55:13.267079       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:55:13.267303       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:55:13.267312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:55:13.269494       1 config.go:197] "Starting service config controller"
	I0906 18:55:13.269521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:55:13.269531       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:55:13.269534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:55:13.269766       1 config.go:326] "Starting node config controller"
	I0906 18:55:13.269792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:55:13.371232       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:55:13.371252       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:55:13.371258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9b99b2f8d6ed] <==
	E0906 19:02:02.940432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.24:41900->192.169.0.24:8443: read: connection reset by peer" logger="UnhandledError"
	W0906 19:02:05.069159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.069252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:05.223901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.24:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.224034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.169.0.24:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:05.985644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:05.985935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:07.751221       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:07.751297       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:08.534428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:08.534502       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:09.228523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:09.228578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:10.309496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:10.309595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:10.913838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:10.914076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:13.134630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:13.134666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:16.082933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:16.083031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:16.701161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:16.701192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:16.712129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:02:16.712179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-scheduler [e17d9a49b80d] <==
	E0906 18:57:43.584607       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3acb7359-b948-41f1-bb46-78ba7ca6ab4e(default/busybox-7dff88458-x6w7h) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-x6w7h"
	E0906 18:57:43.584627       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-x6w7h\": pod busybox-7dff88458-x6w7h is already assigned to node \"ha-343000\"" pod="default/busybox-7dff88458-x6w7h"
	I0906 18:57:43.584740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-x6w7h" node="ha-343000"
	E0906 18:57:43.585378       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jk74s\": pod busybox-7dff88458-jk74s is already assigned to node \"ha-343000-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jk74s" node="ha-343000-m02"
	E0906 18:57:43.586332       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2a6cd3d8-0270-4be8-adee-f6509d6f7d6a(default/busybox-7dff88458-jk74s) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-jk74s"
	E0906 18:57:43.586381       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jk74s\": pod busybox-7dff88458-jk74s is already assigned to node \"ha-343000-m02\"" pod="default/busybox-7dff88458-jk74s"
	I0906 18:57:43.586399       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jk74s" node="ha-343000-m02"
	E0906 18:57:43.737576       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-2j5md is already present in the active queue" pod="default/busybox-7dff88458-2j5md"
	E0906 18:58:13.148396       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-zj66t\": pod kube-proxy-zj66t is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-zj66t" node="ha-343000-m04"
	E0906 18:58:13.149107       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cc9bbfbe-59d6-4ed5-acd0-d85ac97eb0f6(kube-system/kube-proxy-zj66t) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-zj66t"
	E0906 18:58:13.149342       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-zj66t\": pod kube-proxy-zj66t is already assigned to node \"ha-343000-m04\"" pod="kube-system/kube-proxy-zj66t"
	I0906 18:58:13.149401       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-zj66t" node="ha-343000-m04"
	E0906 18:58:13.149049       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vbw2g\": pod kindnet-vbw2g is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vbw2g" node="ha-343000-m04"
	E0906 18:58:13.149550       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 73997222-df35-486b-a5c3-c245cfbde23e(kube-system/kindnet-vbw2g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vbw2g"
	E0906 18:58:13.149563       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vbw2g\": pod kindnet-vbw2g is already assigned to node \"ha-343000-m04\"" pod="kube-system/kindnet-vbw2g"
	I0906 18:58:13.149716       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vbw2g" node="ha-343000-m04"
	E0906 18:58:13.174957       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8hww6\": pod kube-proxy-8hww6 is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8hww6" node="ha-343000-m04"
	E0906 18:58:13.175481       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod aa46eef9-733c-4f42-8c7c-ad0ed8009b8a(kube-system/kube-proxy-8hww6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8hww6"
	E0906 18:58:13.175757       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8hww6\": pod kube-proxy-8hww6 is already assigned to node \"ha-343000-m04\"" pod="kube-system/kube-proxy-8hww6"
	I0906 18:58:13.175909       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8hww6" node="ha-343000-m04"
	E0906 18:58:14.877822       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q6946\": pod kindnet-q6946 is already assigned to node \"ha-343000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-q6946" node="ha-343000-m04"
	E0906 18:58:14.877973       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5c83531b-b03e-46db-9169-70bd1bf41235(kube-system/kindnet-q6946) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-q6946"
	E0906 18:58:14.878004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q6946\": pod kindnet-q6946 is already assigned to node \"ha-343000-m04\"" pod="kube-system/kindnet-q6946"
	I0906 18:58:14.878024       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-q6946" node="ha-343000-m04"
	E0906 19:00:05.908240       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.189750    1516 scope.go:117] "RemoveContainer" containerID="5bbe4cab1a8f31b319510cac2fdadc0d169b3be8e615b77083be1ab07153219b"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.190459    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.190562    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: I0906 19:02:06.271932    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.272125    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:06 ha-343000 kubelet[1516]: E0906 19:02:06.478080    1516 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-343000\" not found"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: E0906 19:02:07.051517    1516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-343000.17f2bcdb164062c9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-343000,UID:ha-343000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-343000,},FirstTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,LastTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-343000,}"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: I0906 19:02:07.201616    1516 scope.go:117] "RemoveContainer" containerID="fa4173483b3595bacc5045728a7466c1399b41ff5aaaf17c4c54bdeb84229416"
	Sep 06 19:02:07 ha-343000 kubelet[1516]: E0906 19:02:07.201695    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-343000_kube-system(a0ae917c880d9b51d191e0dbdd03379a)\"" pod="kube-system/kube-apiserver-ha-343000" podUID="a0ae917c880d9b51d191e0dbdd03379a"
	Sep 06 19:02:08 ha-343000 kubelet[1516]: I0906 19:02:08.334205    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:08 ha-343000 kubelet[1516]: E0906 19:02:08.334395    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:10 ha-343000 kubelet[1516]: I0906 19:02:10.984635    1516 kubelet_node_status.go:72] "Attempting to register node" node="ha-343000"
	Sep 06 19:02:12 ha-343000 kubelet[1516]: I0906 19:02:12.223100    1516 scope.go:117] "RemoveContainer" containerID="c30e1728fc822395e8d5aa6a0ea2eb951ff769e60e6011806558f9469001a06b"
	Sep 06 19:02:12 ha-343000 kubelet[1516]: E0906 19:02:12.223243    1516 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-343000_kube-system(056539ba06e6ef6c96b262e562f5d9a0)\"" pod="kube-system/kube-controller-manager-ha-343000" podUID="056539ba06e6ef6c96b262e562f5d9a0"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: W0906 19:02:13.195842    1516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196051    1516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196151    1516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-343000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
	Sep 06 19:02:13 ha-343000 kubelet[1516]: E0906 19:02:13.196187    1516 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-343000"
	Sep 06 19:02:16 ha-343000 kubelet[1516]: W0906 19:02:16.267122    1516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 06 19:02:16 ha-343000 kubelet[1516]: E0906 19:02:16.267168    1516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:02:16 ha-343000 kubelet[1516]: E0906 19:02:16.479406    1516 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-343000\" not found"
	Sep 06 19:02:19 ha-343000 kubelet[1516]: W0906 19:02:19.340120    1516 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-343000&limit=500&resourceVersion=0": dial tcp 192.169.0.254:8443: connect: no route to host
	Sep 06 19:02:19 ha-343000 kubelet[1516]: E0906 19:02:19.340191    1516 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-343000&limit=500&resourceVersion=0\": dial tcp 192.169.0.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:02:19 ha-343000 kubelet[1516]: E0906 19:02:19.340559    1516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-343000.17f2bcdb164062c9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-343000,UID:ha-343000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-343000,},FirstTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,LastTimestamp:2024-09-06 19:00:56.393499337 +0000 UTC m=+0.182487992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-343000,}"
	Sep 06 19:02:20 ha-343000 kubelet[1516]: I0906 19:02:20.197238    1516 kubelet_node_status.go:72] "Attempting to register node" node="ha-343000"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000: exit status 2 (156.698938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-343000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (160.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 stop -v=7 --alsologtostderr
E0906 12:02:59.513858    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:03:13.445725    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:03:41.158471    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:04:22.597781    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 stop -v=7 --alsologtostderr: (2m40.74445773s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr: exit status 7 (109.295239ms)

                                                
                                                
-- stdout --
	ha-343000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-343000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-343000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-343000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:05:01.643573   12244 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:05:01.643851   12244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.643858   12244 out.go:358] Setting ErrFile to fd 2...
	I0906 12:05:01.643862   12244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.644057   12244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:05:01.644259   12244 out.go:352] Setting JSON to false
	I0906 12:05:01.644282   12244 mustload.go:65] Loading cluster: ha-343000
	I0906 12:05:01.644321   12244 notify.go:220] Checking for updates...
	I0906 12:05:01.644598   12244 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:01.644613   12244 status.go:255] checking status of ha-343000 ...
	I0906 12:05:01.644996   12244 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:01.645030   12244 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:01.654144   12244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56290
	I0906 12:05:01.654556   12244 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:01.654973   12244 main.go:141] libmachine: Using API Version  1
	I0906 12:05:01.654982   12244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:01.655248   12244 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:01.655366   12244 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:01.655457   12244 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:01.655520   12244 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:05:01.656397   12244 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 12107 missing from process table
	I0906 12:05:01.656440   12244 status.go:330] ha-343000 host status = "Stopped" (err=<nil>)
	I0906 12:05:01.656450   12244 status.go:343] host is not running, skipping remaining checks
	I0906 12:05:01.656457   12244 status.go:257] ha-343000 status: &{Name:ha-343000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:05:01.656480   12244 status.go:255] checking status of ha-343000-m02 ...
	I0906 12:05:01.656714   12244 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:01.656737   12244 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:01.665038   12244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56293
	I0906 12:05:01.665349   12244 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:01.665663   12244 main.go:141] libmachine: Using API Version  1
	I0906 12:05:01.665677   12244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:01.665941   12244 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:01.666052   12244 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:05:01.666132   12244 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:01.666198   12244 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:05:01.667087   12244 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:01.667105   12244 status.go:330] ha-343000-m02 host status = "Stopped" (err=<nil>)
	I0906 12:05:01.667111   12244 status.go:343] host is not running, skipping remaining checks
	I0906 12:05:01.667117   12244 status.go:257] ha-343000-m02 status: &{Name:ha-343000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:05:01.667129   12244 status.go:255] checking status of ha-343000-m03 ...
	I0906 12:05:01.667389   12244 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:01.667415   12244 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:01.675873   12244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56295
	I0906 12:05:01.676192   12244 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:01.676526   12244 main.go:141] libmachine: Using API Version  1
	I0906 12:05:01.676546   12244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:01.676727   12244 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:01.676833   12244 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:05:01.676914   12244 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:01.676976   12244 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:05:01.677895   12244 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:05:01.677917   12244 status.go:330] ha-343000-m03 host status = "Stopped" (err=<nil>)
	I0906 12:05:01.677925   12244 status.go:343] host is not running, skipping remaining checks
	I0906 12:05:01.677931   12244 status.go:257] ha-343000-m03 status: &{Name:ha-343000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:05:01.677945   12244 status.go:255] checking status of ha-343000-m04 ...
	I0906 12:05:01.678195   12244 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:01.678225   12244 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:01.686649   12244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56297
	I0906 12:05:01.693140   12244 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:01.693498   12244 main.go:141] libmachine: Using API Version  1
	I0906 12:05:01.693509   12244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:01.693713   12244 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:01.693822   12244 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:05:01.693898   12244 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:01.693978   12244 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 12:05:01.694898   12244 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid 10558 missing from process table
	I0906 12:05:01.694932   12244 status.go:330] ha-343000-m04 host status = "Stopped" (err=<nil>)
	I0906 12:05:01.694939   12244 status.go:343] host is not running, skipping remaining checks
	I0906 12:05:01.694946   12244 status.go:257] ha-343000-m04 status: &{Name:ha-343000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr": ha-343000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-343000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-343000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-343000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr": ha-343000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-343000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-343000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-343000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr": ha-343000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-343000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-343000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-343000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000: exit status 7 (69.273718ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-343000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (160.92s)

TestMultiControlPlane/serial/RestartCluster (219.61s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-343000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0906 12:07:59.516317    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:08:13.450104    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-343000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (3m35.481549118s)

-- stdout --
	* [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	* Restarting existing hyperkit VM for "ha-343000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	* Enabled addons: 
	
	* Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	* Restarting existing hyperkit VM for "ha-343000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.24
	* Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	  - env NO_PROXY=192.169.0.24
	* Verifying Kubernetes components...
	
	* Starting "ha-343000-m03" control-plane node in "ha-343000" cluster
	* Restarting existing hyperkit VM for "ha-343000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.24,192.169.0.25
	* Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	  - env NO_PROXY=192.169.0.24
	  - env NO_PROXY=192.169.0.24,192.169.0.25
	* Verifying Kubernetes components...
	
	* Starting "ha-343000-m04" worker node in "ha-343000" cluster
	* Restarting existing hyperkit VM for "ha-343000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	
	

-- /stdout --
** stderr ** 
	I0906 12:05:01.821113   12253 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:05:01.821396   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821403   12253 out.go:358] Setting ErrFile to fd 2...
	I0906 12:05:01.821407   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821585   12253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:05:01.822962   12253 out.go:352] Setting JSON to false
	I0906 12:05:01.845482   12253 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11072,"bootTime":1725638429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:05:01.845567   12253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:05:01.867344   12253 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:05:01.909192   12253 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:05:01.909251   12253 notify.go:220] Checking for updates...
	I0906 12:05:01.951681   12253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:01.972896   12253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:05:01.993997   12253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:05:02.014915   12253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:05:02.036376   12253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:05:02.058842   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:02.059362   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.059426   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.069603   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56303
	I0906 12:05:02.069962   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.070394   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.070407   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.070602   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.070721   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.070905   12253 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:05:02.071152   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.071173   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.079785   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56305
	I0906 12:05:02.080100   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.080480   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.080508   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.080753   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.080876   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.109151   12253 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:05:02.151203   12253 start.go:297] selected driver: hyperkit
	I0906 12:05:02.151225   12253 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.151398   12253 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:05:02.151526   12253 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.151681   12253 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:05:02.160708   12253 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:05:02.164397   12253 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.164417   12253 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:05:02.167034   12253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:05:02.167076   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:02.167082   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:02.167157   12253 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.167283   12253 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.209167   12253 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:05:02.230210   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:02.230284   12253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:05:02.230304   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:02.230523   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:02.230539   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:02.230657   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.231246   12253 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:02.231321   12253 start.go:364] duration metric: took 58.855µs to acquireMachinesLock for "ha-343000"
	I0906 12:05:02.231338   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:02.231348   12253 fix.go:54] fixHost starting: 
	I0906 12:05:02.231579   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.231602   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.240199   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56307
	I0906 12:05:02.240538   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.240898   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.240906   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.241115   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.241241   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.241344   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:02.241429   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.241509   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:05:02.242441   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 12107 missing from process table
	I0906 12:05:02.242473   12253 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:05:02.242488   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:05:02.242570   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:02.285299   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:05:02.308252   12253 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:05:02.308536   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.308568   12253 main.go:141] libmachine: (ha-343000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:05:02.308690   12253 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:05:02.418778   12253 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:05:02.418805   12253 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:02.418989   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419036   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419095   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:02.419142   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:02.419160   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:02.420829   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Pid is 12266
	I0906 12:05:02.421178   12253 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:05:02.421194   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.421256   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:02.422249   12253 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:05:02.422316   12253 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:02.422340   12253 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66db525c}
	I0906 12:05:02.422356   12253 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:05:02.422371   12253 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:05:02.422430   12253 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:05:02.423159   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:02.423357   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.423787   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:02.423798   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.423945   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:02.424057   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:02.424240   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424491   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:02.424632   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:02.424882   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:02.424892   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:02.428574   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:02.479264   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:02.479938   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.479953   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.479971   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.479984   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.867700   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:02.867715   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:02.983045   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.983079   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.983090   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.983110   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.983957   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:02.983967   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:08.596032   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:05:08.596072   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:05:08.596081   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:05:08.620302   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:05:13.496727   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:13.496743   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.496887   12253 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:05:13.496898   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.497005   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.497091   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.497190   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497290   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497391   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.497515   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.497658   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.497666   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:05:13.573506   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:05:13.573525   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.573649   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.573744   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573841   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573933   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.574054   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.574199   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.574210   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:13.646449   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:13.646474   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:13.646492   12253 buildroot.go:174] setting up certificates
	I0906 12:05:13.646500   12253 provision.go:84] configureAuth start
	I0906 12:05:13.646506   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.646647   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:13.646742   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.646835   12253 provision.go:143] copyHostCerts
	I0906 12:05:13.646872   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.646964   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:13.646972   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.647092   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:13.647297   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647337   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:13.647342   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647419   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:13.647566   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647604   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:13.647609   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647688   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:13.647833   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:05:13.694032   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:13.694082   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:13.694097   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.694208   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.694294   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.694394   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.694509   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:13.734054   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:13.734119   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:13.754153   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:13.754219   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 12:05:13.773776   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:13.773840   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:05:13.793258   12253 provision.go:87] duration metric: took 146.744964ms to configureAuth
	I0906 12:05:13.793272   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:13.793440   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:13.793455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:13.793596   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.793699   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.793786   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793872   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793955   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.794076   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.794207   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.794215   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:13.860967   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:13.860981   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:13.861068   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:13.861082   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.861205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.861297   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861411   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861521   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.861683   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.861822   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.861868   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:13.937805   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:13.937827   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.937964   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.938080   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938295   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.938419   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.938558   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.938571   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:15.619728   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:15.619742   12253 machine.go:96] duration metric: took 13.195921245s to provisionDockerMachine
	I0906 12:05:15.619754   12253 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:05:15.619762   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:15.619772   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.619950   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:15.619966   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.620058   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.620154   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.620257   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.620337   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.660028   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:15.663309   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:15.663323   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:15.663418   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:15.663631   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:15.663638   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:15.663848   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:15.671393   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:15.691128   12253 start.go:296] duration metric: took 71.364923ms for postStartSetup
	I0906 12:05:15.691156   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.691327   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:15.691341   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.691453   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.691544   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.691628   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.691712   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.732095   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:15.732157   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:15.785220   12253 fix.go:56] duration metric: took 13.553838389s for fixHost
	I0906 12:05:15.785242   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.785373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.785462   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785558   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785650   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.785774   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:15.785926   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:15.785933   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:15.851168   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649515.950195219
	
	I0906 12:05:15.851179   12253 fix.go:216] guest clock: 1725649515.950195219
	I0906 12:05:15.851184   12253 fix.go:229] Guest: 2024-09-06 12:05:15.950195219 -0700 PDT Remote: 2024-09-06 12:05:15.785232 -0700 PDT m=+13.999000936 (delta=164.963219ms)
	I0906 12:05:15.851205   12253 fix.go:200] guest clock delta is within tolerance: 164.963219ms
	I0906 12:05:15.851209   12253 start.go:83] releasing machines lock for "ha-343000", held for 13.619855055s
	I0906 12:05:15.851228   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851359   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:15.851455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851761   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851860   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851943   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:15.851974   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852006   12253 ssh_runner.go:195] Run: cat /version.json
	I0906 12:05:15.852029   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852070   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852126   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852163   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852217   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852273   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852292   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852391   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.852414   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.945582   12253 ssh_runner.go:195] Run: systemctl --version
	I0906 12:05:15.950518   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:05:15.954710   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:15.954750   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:15.972724   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:15.972739   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:15.972842   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:15.997626   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:16.009969   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:16.021002   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.021063   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:16.029939   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.039024   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:16.047772   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.056625   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:16.065543   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:16.074247   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:16.082976   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:16.091738   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:16.099691   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:16.107701   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.207522   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:16.227285   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:16.227363   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:16.242536   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.255682   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:16.272770   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.283410   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.293777   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:16.316221   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.326357   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:16.341265   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:16.344224   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:16.351341   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:16.364686   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:16.462680   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:16.567102   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.567167   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:16.581141   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.682906   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:19.018795   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.33586105s)
	I0906 12:05:19.018863   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:19.029907   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:19.042839   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.053183   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:19.161103   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:19.269627   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.376110   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:19.389292   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.400498   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.508773   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:19.574293   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:19.574369   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:19.578648   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:19.578702   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:19.581725   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:19.611289   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:19.611360   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.628755   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.690349   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:19.690435   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:19.690798   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:19.695532   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:19.705484   12253 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAV
IP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp
:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:05:19.705569   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:19.705619   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.718680   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.718691   12253 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:05:19.718764   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.731988   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.732008   12253 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:05:19.732017   12253 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:05:19.732095   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:19.732160   12253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:05:19.769790   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:19.769810   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:19.769820   12253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:05:19.769836   12253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:05:19.769924   12253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:05:19.769938   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:19.769993   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:19.783021   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:19.783091   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:19.783139   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:19.790731   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:19.790780   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:05:19.798087   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:05:19.811294   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:19.826571   12253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:05:19.840214   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:19.853805   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:19.856803   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:19.866597   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.969582   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:19.984116   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:05:19.984128   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:19.984139   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:19.984324   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:19.984402   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:19.984413   12253 certs.go:256] generating profile certs ...
	I0906 12:05:19.984529   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:19.984611   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:05:19.984683   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:19.984690   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:19.984715   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:19.984733   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:19.984750   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:19.984767   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:19.984795   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:19.984823   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:19.984846   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:19.984950   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:19.984995   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:19.985004   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:19.985045   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:19.985074   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:19.985102   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:19.985164   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:19.985201   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:19.985223   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:19.985241   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:19.985738   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:20.016977   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:20.040002   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:20.074896   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:20.096785   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:20.117992   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:20.152101   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:20.181980   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:20.249104   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:20.310747   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:20.334377   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:20.354759   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:05:20.368573   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:20.372727   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:20.381943   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385218   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385254   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.389369   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:20.398370   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:20.407468   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410735   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410769   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.414896   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:20.423953   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:20.432893   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436127   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436161   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.440280   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:05:20.449469   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:20.452834   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:20.457085   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:20.461715   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:20.466070   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:20.470282   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:20.474449   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:05:20.478690   12253 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:20.478796   12253 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:05:20.491888   12253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:05:20.500336   12253 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:05:20.500348   12253 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:05:20.500388   12253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:05:20.508605   12253 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:05:20.508923   12253 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.509004   12253 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:05:20.509222   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.509871   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.510072   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:05:20.510389   12253 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:05:20.510569   12253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:05:20.518433   12253 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:05:20.518445   12253 kubeadm.go:597] duration metric: took 18.093623ms to restartPrimaryControlPlane
	I0906 12:05:20.518450   12253 kubeadm.go:394] duration metric: took 39.76917ms to StartCluster
	I0906 12:05:20.518463   12253 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.518535   12253 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.518965   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.519194   12253 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:20.519207   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:05:20.519217   12253 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:05:20.519329   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.562952   12253 out.go:177] * Enabled addons: 
	I0906 12:05:20.584902   12253 addons.go:510] duration metric: took 65.689522ms for enable addons: enabled=[]
	I0906 12:05:20.584940   12253 start.go:246] waiting for cluster config update ...
	I0906 12:05:20.584973   12253 start.go:255] writing updated cluster config ...
	I0906 12:05:20.608171   12253 out.go:201] 
	I0906 12:05:20.630349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.630488   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.652951   12253 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:05:20.695164   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:20.695203   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:20.695405   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:20.695421   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:20.695517   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.696367   12253 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:20.696454   12253 start.go:364] duration metric: took 67.794µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:05:20.696472   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:20.696479   12253 fix.go:54] fixHost starting: m02
	I0906 12:05:20.696771   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:20.696805   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:20.705845   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56329
	I0906 12:05:20.706183   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:20.706528   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:20.706543   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:20.706761   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:20.706875   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.706980   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:05:20.707064   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.707136   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:05:20.708055   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.708088   12253 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:05:20.708098   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:05:20.708185   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:20.734735   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:05:20.776747   12253 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:05:20.777073   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.777115   12253 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:05:20.778701   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.778717   12253 main.go:141] libmachine: (ha-343000-m02) DBG | pid 12118 is in state "Stopped"
	I0906 12:05:20.778778   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:05:20.779095   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:05:20.806950   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:05:20.806972   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:20.807155   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807233   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807304   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:20.807361   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:20.807374   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:20.808851   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Pid is 12276
	I0906 12:05:20.809435   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:05:20.809451   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.809514   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12276
	I0906 12:05:20.811081   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:05:20.811162   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:20.811181   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:05:20.811209   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca2f2}
	I0906 12:05:20.811220   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:05:20.811238   12253 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:05:20.811245   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:05:20.811904   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:20.812111   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.812569   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:20.812582   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.812711   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:20.812849   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:20.812941   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813131   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:20.813262   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:20.813401   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:20.813411   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:20.817160   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:20.825311   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:20.826263   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:20.826278   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:20.826305   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:20.826316   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.214947   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:21.214961   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:21.329668   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:21.329695   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:21.329711   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:21.329721   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.330549   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:21.330560   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:26.960134   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:05:26.960175   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:05:26.960183   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:05:26.984271   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:05:30.128139   12253 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.25:22: connect: connection refused
	I0906 12:05:33.191918   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:33.191932   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192104   12253 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:05:33.192113   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192203   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.192293   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.192374   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192456   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192573   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.192685   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.192834   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.192848   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:05:33.271080   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:05:33.271107   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.271242   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.271343   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271432   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271517   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.271653   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.271816   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.271828   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:33.340749   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:33.340766   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:33.340776   12253 buildroot.go:174] setting up certificates
	I0906 12:05:33.340781   12253 provision.go:84] configureAuth start
	I0906 12:05:33.340788   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.340917   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:33.341015   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.341102   12253 provision.go:143] copyHostCerts
	I0906 12:05:33.341127   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341183   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:33.341189   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341303   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:33.341481   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341516   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:33.341521   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341626   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:33.341793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341824   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:33.341829   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341902   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:33.342105   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:05:33.430053   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:33.430099   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:33.430112   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.430247   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.430337   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.430424   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.430498   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:33.468786   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:33.468854   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:33.488429   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:33.488502   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:05:33.507788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:33.507853   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:05:33.527149   12253 provision.go:87] duration metric: took 186.359429ms to configureAuth
	I0906 12:05:33.527164   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:33.527349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:33.527363   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:33.527493   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.527581   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.527670   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527752   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527834   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.527941   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.528081   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.528089   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:33.592983   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:33.592995   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:33.593066   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:33.593077   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.593197   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.593303   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593392   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593487   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.593630   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.593775   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.593821   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:33.669226   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:33.669253   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.669404   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.669513   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669628   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669726   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.669876   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.670026   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.670038   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:35.327313   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:35.327328   12253 machine.go:96] duration metric: took 14.51472045s to provisionDockerMachine
	I0906 12:05:35.327335   12253 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:05:35.327345   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:35.327357   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.327550   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:35.327564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.327658   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.327737   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.327824   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.327895   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.374953   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:35.380104   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:35.380118   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:35.380209   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:35.380346   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:35.380353   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:35.380535   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:35.392904   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:35.425316   12253 start.go:296] duration metric: took 97.970334ms for postStartSetup
	I0906 12:05:35.425336   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.425510   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:35.425521   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.425611   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.425700   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.425784   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.425866   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.465210   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:35.465270   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:35.519276   12253 fix.go:56] duration metric: took 14.822763667s for fixHost
	I0906 12:05:35.519322   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.519466   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.519564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519682   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519766   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.519897   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:35.520049   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:35.520058   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:35.586671   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649535.517793561
	
	I0906 12:05:35.586682   12253 fix.go:216] guest clock: 1725649535.517793561
	I0906 12:05:35.586690   12253 fix.go:229] Guest: 2024-09-06 12:05:35.517793561 -0700 PDT Remote: 2024-09-06 12:05:35.519294 -0700 PDT m=+33.733024449 (delta=-1.500439ms)
	I0906 12:05:35.586700   12253 fix.go:200] guest clock delta is within tolerance: -1.500439ms
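The fix.go lines above compare a `date +%s.%N` reading taken over SSH against the host clock and accept the guest if the delta is within tolerance. A minimal local sketch of that comparison (both readings are taken locally here, so the delta stays tiny; GNU `date` assumed):

```shell
# Stand-in for the guest's `date +%s.%N` output fetched over SSH:
guest_ts=$(date +%s.%N)
# The "Remote" reading taken on the host side:
local_ts=$(date +%s.%N)
# Report the skew the same way fix.go logs its delta:
awk -v g="$guest_ts" -v l="$local_ts" \
  'BEGIN { printf "guest clock delta=%.9fs\n", l - g }'
```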
	I0906 12:05:35.586703   12253 start.go:83] releasing machines lock for "ha-343000-m02", held for 14.890212868s
	I0906 12:05:35.586719   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.586869   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:35.609959   12253 out.go:177] * Found network options:
	I0906 12:05:35.631361   12253 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:05:35.652026   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.652053   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652675   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652820   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652904   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:35.652927   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:05:35.652986   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.653055   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:05:35.653068   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.653078   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653249   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653283   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653371   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653405   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653519   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653550   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.653617   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:05:35.689663   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:35.689725   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:35.741169   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:35.741183   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.741249   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:35.756280   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:35.765285   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:35.774250   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:35.774298   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:35.783141   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.792103   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:35.800998   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.809931   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:35.818930   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:35.828100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:35.837011   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
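The run of sed commands above rewrites /etc/containerd/config.toml in place; the pivotal edit flips `SystemdCgroup` so containerd matches the "cgroupfs" driver the log announces. A sketch of that edit on a throwaway copy (sample TOML is illustrative; GNU sed assumed for `-i -r`):

```shell
# Temp file stands in for /etc/containerd/config.toml:
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Same substitution as the logged command; \1 preserves indentation:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
rm -f "$cfg"
```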
	I0906 12:05:35.846071   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:35.854051   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:35.862225   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:35.953449   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:35.973036   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.973102   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:35.989701   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.002119   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:36.020969   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.032323   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.043370   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:36.064919   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.076134   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:36.091185   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:36.094041   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:36.101975   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:36.115524   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:36.210477   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:36.307446   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:36.307474   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:36.321506   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:36.425142   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:38.743512   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31834803s)
	I0906 12:05:38.743573   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:38.754689   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:38.767595   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:38.778550   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:38.871803   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:38.967444   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.077912   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:39.091499   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:39.102647   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.199868   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:39.269396   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:39.269473   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:39.274126   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:39.274176   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:39.279526   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:39.307628   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:39.307702   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.324272   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.363496   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:39.384323   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:05:39.405031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:39.405472   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:39.410152   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
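The /etc/hosts edit above is idempotent: it filters out any stale `host.minikube.internal` line, appends the fresh mapping, and swaps the file back in, so repeated runs never accumulate duplicates. A sketch with a temp file standing in for /etc/hosts (no sudo; bash `$'\t'` quoting as in the logged command):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the stale entry, append the current one, then replace the file:
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
rm -f "$hosts" "$hosts.new"
```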
	I0906 12:05:39.420507   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:05:39.420684   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:39.420907   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.420932   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.430101   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56352
	I0906 12:05:39.430438   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.430796   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.430812   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.431028   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.431139   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:39.431212   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:39.431285   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:39.432244   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:05:39.432496   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.432518   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.441251   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56354
	I0906 12:05:39.441578   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.441903   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.441918   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.442138   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.442248   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:39.442348   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.25
	I0906 12:05:39.442355   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:39.442365   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:39.442516   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:39.442578   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:39.442588   12253 certs.go:256] generating profile certs ...
	I0906 12:05:39.442681   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:39.442772   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.7390dc12
	I0906 12:05:39.442830   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:39.442838   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:39.442859   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:39.442879   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:39.442896   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:39.442915   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:39.442951   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:39.442970   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:39.442987   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:39.443067   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:39.443106   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:39.443114   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:39.443147   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:39.443183   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:39.443212   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:39.443276   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:39.443310   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.443336   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.443355   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.443381   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:39.443473   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:39.443566   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:39.443662   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:39.443742   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:39.474601   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:05:39.477773   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:05:39.486087   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:05:39.489291   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:05:39.497797   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:05:39.500976   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:05:39.508902   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:05:39.512097   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:05:39.522208   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:05:39.529029   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:05:39.538558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:05:39.541788   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:05:39.551255   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:39.571163   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:39.590818   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:39.610099   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:39.629618   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:39.649203   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:39.668940   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:39.688319   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:39.707568   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:39.727593   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:39.746946   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:39.766191   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:05:39.779761   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:05:39.793389   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:05:39.807028   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:05:39.820798   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:05:39.834428   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:05:39.848169   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:05:39.861939   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:39.866268   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:39.875520   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878895   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878936   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.883242   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:39.892394   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:39.901475   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904880   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904919   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.909164   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:39.918366   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:39.927561   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.930968   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.931005   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.935325   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
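The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: they are the subject-name hash OpenSSL computes with the same `openssl x509 -hash -noout` call the log runs, which is how the system trust store looks certificates up. A sketch with a throwaway self-signed cert (paths are illustrative):

```shell
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Subject hash drives the <hash>.0 link name under /etc/ssl/certs:
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$h.0"
ls -la "$dir/$h.0"
rm -rf "$dir"
```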
	I0906 12:05:39.944442   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:39.947919   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:39.952225   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:39.956510   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:39.960794   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:39.965188   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:39.969546   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
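Each `-checkend 86400` run above screens a cluster certificate for reuse: the command exits 0 only if the certificate remains valid for at least another 24 hours. A sketch with a throwaway two-day certificate, so the check passes:

```shell
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
  -keyout "$dir/tls.key" -out "$dir/tls.crt" -days 2 2>/dev/null
# Exit status, not output, is what the caller branches on:
if openssl x509 -noout -in "$dir/tls.crt" -checkend 86400 >/dev/null; then
  echo "certificate still valid for at least 24h; safe to reuse"
fi
rm -rf "$dir"
```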
	I0906 12:05:39.973805   12253 kubeadm.go:934] updating node {m02 192.169.0.25 8443 v1.31.0 docker true true} ...
	I0906 12:05:39.973869   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:39.973885   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:39.973920   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:39.987092   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:39.987133   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:39.987182   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:39.995535   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:39.995584   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:05:40.003762   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:05:40.017266   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:40.030719   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:40.044348   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:40.047310   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:40.057546   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.156340   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.171403   12253 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:40.171578   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:40.192574   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:05:40.213457   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.344499   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.359579   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:40.359776   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:05:40.359813   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:05:40.359973   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:05:40.360058   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:40.360063   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:40.360071   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:40.360075   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:47.989850   12253 round_trippers.go:574] Response Status:  in 7629 milliseconds
	I0906 12:05:48.990862   12253 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990895   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:48.990902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:48.990922   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:49.992764   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:49.992860   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:56357->192.169.0.24:8443: read: connection reset by peer
	I0906 12:05:49.992914   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:49.992923   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:49.992931   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:49.992938   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:50.992884   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:50.992985   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:50.992993   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:50.993001   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:50.993007   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:51.994156   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:51.994218   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:51.994272   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:51.994282   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:51.994293   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:51.994300   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:52.994610   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:52.994678   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:52.994684   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:52.994690   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:52.994695   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:53.996452   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:53.996513   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:53.996568   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:53.996577   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:53.996587   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:53.996600   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:54.996281   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:54.996431   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:54.996445   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:54.996456   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:54.996470   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997732   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:55.997791   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:55.997834   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:55.997841   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:55.997848   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997855   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998659   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:56.998737   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:56.998743   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:56.998748   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998753   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:57.998704   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:57.998768   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:57.998824   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:57.998830   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:57.998841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:57.998847   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.234879   12253 round_trippers.go:574] Response Status: 200 OK in 2236 milliseconds
	I0906 12:06:00.235584   12253 node_ready.go:49] node "ha-343000-m02" has status "Ready":"True"
	I0906 12:06:00.235597   12253 node_ready.go:38] duration metric: took 19.875567395s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:06:00.235604   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:00.235643   12253 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:06:00.235653   12253 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:06:00.235696   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:00.235701   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.235707   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.235711   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.262088   12253 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0906 12:06:00.268356   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.268408   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:00.268414   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.268421   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.268427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.271139   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.271625   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.271633   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.271638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.271642   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.273753   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.274136   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.274144   12253 pod_ready.go:82] duration metric: took 5.774893ms for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274150   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274179   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:00.274184   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.274189   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.274192   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.275924   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.276344   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.276351   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.276355   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.276360   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278001   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.278322   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.278329   12253 pod_ready.go:82] duration metric: took 4.174121ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278335   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278363   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:00.278368   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.278373   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278379   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.280145   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.280523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.280530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.280535   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.280540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.282107   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.282477   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.282486   12253 pod_ready.go:82] duration metric: took 4.146745ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282492   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282522   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.282528   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.282534   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.282537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.284223   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.284663   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.284670   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.284676   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.284679   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.286441   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.782726   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.782751   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.782796   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.782807   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.786175   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:00.786692   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.786700   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.786706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.786710   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.788874   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.283655   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.283671   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.283678   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.283683   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.285985   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.286465   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.286473   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.286481   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.286485   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.288565   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.782633   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.782651   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.782659   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.782664   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.785843   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:01.786296   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.786304   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.786309   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.786314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.788345   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.788771   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.788779   12253 pod_ready.go:82] duration metric: took 1.506279407s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788786   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788823   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:01.788828   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.788833   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.788838   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.790798   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:01.791160   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:01.791171   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.791184   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.791187   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.793250   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.793611   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.793620   12253 pod_ready.go:82] duration metric: took 4.828788ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.793631   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.837481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:01.837495   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.837504   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.837509   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.840718   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.037469   12253 request.go:632] Waited for 196.356353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037506   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037512   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.037520   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.037525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.040221   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.040550   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:02.040560   12253 pod_ready.go:82] duration metric: took 246.922589ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.040567   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.237374   12253 request.go:632] Waited for 196.770161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237419   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.237436   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.237442   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.240098   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.437383   12253 request.go:632] Waited for 196.723319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437429   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.437443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.437449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.440277   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.636447   12253 request.go:632] Waited for 94.227022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636509   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636516   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.636524   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.636528   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.640095   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.837639   12253 request.go:632] Waited for 197.104367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837707   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837717   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.837763   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.837788   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.841651   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:03.040768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.040781   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.040789   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.040793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.043403   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:03.236506   12253 request.go:632] Waited for 192.559607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236606   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236618   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.236631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.236637   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.240751   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.540928   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.540954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.540973   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.540980   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.545016   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.637802   12253 request.go:632] Waited for 92.404425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637881   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637890   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.637902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.637910   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.642163   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.041768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.041794   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.041804   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.041813   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.046193   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.047251   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.047260   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.047266   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.047277   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.056137   12253 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0906 12:06:04.056428   12253 pod_ready.go:103] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:04.541406   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.541425   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.541434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.541439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.544224   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:04.544684   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.544691   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.544697   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.544707   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.547090   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.040907   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:05.040922   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.040930   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.040934   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.044733   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.045134   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:05.045143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.045149   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.045152   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.047168   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.047571   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.047581   12253 pod_ready.go:82] duration metric: took 3.007003521s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047587   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047621   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:05.047626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.047631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.047636   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.049432   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:05.236368   12253 request.go:632] Waited for 186.419986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236497   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236514   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.236525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.236532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.239828   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.240204   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.240214   12253 pod_ready.go:82] duration metric: took 192.620801ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.240220   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.435846   12253 request.go:632] Waited for 195.558833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435906   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.435914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.435921   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.438946   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.636650   12253 request.go:632] Waited for 197.107158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636711   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636719   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.636728   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.636733   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.639926   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.640212   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.640221   12253 pod_ready.go:82] duration metric: took 399.995302ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.640232   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.837401   12253 request.go:632] Waited for 197.103806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837486   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.837513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.837523   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.840662   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.035821   12253 request.go:632] Waited for 194.603254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035950   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.035962   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.035968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.039252   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.039561   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.039571   12253 pod_ready.go:82] duration metric: took 399.332528ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.039578   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.236804   12253 request.go:632] Waited for 197.127943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236841   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236849   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.236856   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.236861   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.239571   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.435983   12253 request.go:632] Waited for 195.836904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436083   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436095   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.436107   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.436115   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.440028   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.440297   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.440306   12253 pod_ready.go:82] duration metric: took 400.722778ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.440313   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.635911   12253 request.go:632] Waited for 195.558637ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635989   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635997   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.636005   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.636009   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.638766   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.836563   12253 request.go:632] Waited for 197.42239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836630   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836640   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.836651   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.836656   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.840182   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.840437   12253 pod_ready.go:93] pod "kube-proxy-8hww6" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.840446   12253 pod_ready.go:82] duration metric: took 400.127213ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.840453   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.036000   12253 request.go:632] Waited for 195.50345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036078   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.036093   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.036101   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.039960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.237550   12253 request.go:632] Waited for 197.186932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237618   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237627   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.237638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.237645   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.241824   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:07.242186   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.242196   12253 pod_ready.go:82] duration metric: took 401.736827ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.242202   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.437080   12253 request.go:632] Waited for 194.824311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437120   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437127   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.437134   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.437177   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.439746   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:07.636668   12253 request.go:632] Waited for 196.435868ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636764   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636773   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.636784   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.636790   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.640555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.640971   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.640979   12253 pod_ready.go:82] duration metric: took 398.771488ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.640986   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.837782   12253 request.go:632] Waited for 196.72045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837885   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837895   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.837913   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.841222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.037474   12253 request.go:632] Waited for 195.707367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037543   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037551   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.037559   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.037564   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.041008   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.237863   12253 request.go:632] Waited for 96.589125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238009   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238027   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.238039   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.238064   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.241278   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.436102   12253 request.go:632] Waited for 194.439362ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436137   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.436151   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.436183   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.439043   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:08.642356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.642376   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.642388   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.642397   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.645933   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.837859   12253 request.go:632] Waited for 191.363155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837895   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837900   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.837911   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.841081   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.141167   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.141182   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.141191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.141195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.144158   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.235895   12253 request.go:632] Waited for 91.258445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235957   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.235972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.235977   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.239065   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.641494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.641508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.641517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.641521   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.644350   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.644757   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.644765   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.644771   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.644774   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.647091   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.647426   12253 pod_ready.go:103] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:10.141899   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:10.141923   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.141934   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.141941   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.145540   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.145973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.145981   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.145987   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.145989   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.148176   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.148538   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.148547   12253 pod_ready.go:82] duration metric: took 2.507551998s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.148554   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.235772   12253 request.go:632] Waited for 87.183047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235805   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235811   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.235831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.235849   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.238046   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.437551   12253 request.go:632] Waited for 199.151796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437619   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.437643   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.437648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.440639   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.440964   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.440974   12253 pod_ready.go:82] duration metric: took 292.414078ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.440981   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.636354   12253 request.go:632] Waited for 195.279783ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636426   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636437   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.636450   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.636456   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.641024   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:10.836907   12253 request.go:632] Waited for 195.513588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.836991   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.837001   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.837012   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.837020   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.840787   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.841194   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.841203   12253 pod_ready.go:82] duration metric: took 400.216153ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.841209   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.036390   12253 request.go:632] Waited for 195.137597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036488   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.036510   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.036517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.040104   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:11.236464   12253 request.go:632] Waited for 195.741522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.236507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.236513   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.244008   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:11.244389   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:11.244399   12253 pod_ready.go:82] duration metric: took 403.184015ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.244409   12253 pod_ready.go:39] duration metric: took 11.008775818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:11.244428   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:11.244490   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:11.260044   12253 api_server.go:72] duration metric: took 31.088552933s to wait for apiserver process to appear ...
	I0906 12:06:11.260057   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:11.260076   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:11.268665   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:11.268720   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:11.268725   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.268730   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.268734   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.269258   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:11.269330   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:11.269341   12253 api_server.go:131] duration metric: took 9.279203ms to wait for apiserver health ...
	I0906 12:06:11.269351   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:11.436974   12253 request.go:632] Waited for 167.586901ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437022   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437029   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.437043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.437047   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.441302   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:11.447157   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:11.447183   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447192   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447198   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.447201   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.447204   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.447208   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447211   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.447214   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.447218   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447223   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.447228   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.447232   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.447237   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.447241   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.447244   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.447247   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.447253   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.447258   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.447264   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.447268   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.447270   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.447273   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.447276   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.447294   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.447303   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.447308   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.447313   12253 system_pods.go:74] duration metric: took 177.956833ms to wait for pod list to return data ...
	I0906 12:06:11.447319   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:11.637581   12253 request.go:632] Waited for 190.208152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637651   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637657   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.637664   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.637668   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.650462   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:11.650666   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:11.650678   12253 default_sa.go:55] duration metric: took 203.353142ms for default service account to be created ...
	I0906 12:06:11.650687   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:11.837096   12253 request.go:632] Waited for 186.371823ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837128   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837134   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.837139   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.837143   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.866992   12253 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0906 12:06:11.873145   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:11.873167   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873175   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873181   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.873185   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.873188   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.873195   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873199   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.873202   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.873206   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873211   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.873215   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.873219   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.873223   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.873227   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.873231   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.873233   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.873236   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.873240   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.873244   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.873247   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.873252   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.873256   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.873259   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.873262   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.873265   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.873268   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.873274   12253 system_pods.go:126] duration metric: took 222.581886ms to wait for k8s-apps to be running ...
	I0906 12:06:11.873283   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:11.873340   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:11.886025   12253 system_svc.go:56] duration metric: took 12.733456ms WaitForService to wait for kubelet
	I0906 12:06:11.886050   12253 kubeadm.go:582] duration metric: took 31.714560483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:11.886086   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:12.036232   12253 request.go:632] Waited for 150.073414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036268   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036273   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:12.036286   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:12.036290   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:12.048789   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:12.049838   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049855   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049868   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049873   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049876   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049881   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049884   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049893   12253 node_conditions.go:105] duration metric: took 163.797553ms to run NodePressure ...
	I0906 12:06:12.049902   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:12.049922   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:12.087274   12253 out.go:201] 
	I0906 12:06:12.123635   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:12.123705   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.161370   12253 out.go:177] * Starting "ha-343000-m03" control-plane node in "ha-343000" cluster
	I0906 12:06:12.219408   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:12.219442   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:12.219591   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:12.219605   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:12.219694   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.220349   12253 start.go:360] acquireMachinesLock for ha-343000-m03: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:12.220455   12253 start.go:364] duration metric: took 68.753µs to acquireMachinesLock for "ha-343000-m03"
	I0906 12:06:12.220476   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:12.220482   12253 fix.go:54] fixHost starting: m03
	I0906 12:06:12.220813   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:12.220843   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:12.230327   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56369
	I0906 12:06:12.230794   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:12.231264   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:12.231284   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:12.231543   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:12.231691   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.231816   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:06:12.231923   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.232050   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:06:12.233006   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.233040   12253 fix.go:112] recreateIfNeeded on ha-343000-m03: state=Stopped err=<nil>
	I0906 12:06:12.233052   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	W0906 12:06:12.233162   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:12.271360   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m03" ...
	I0906 12:06:12.312281   12253 main.go:141] libmachine: (ha-343000-m03) Calling .Start
	I0906 12:06:12.312472   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.312588   12253 main.go:141] libmachine: (ha-343000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid
	I0906 12:06:12.314085   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.314111   12253 main.go:141] libmachine: (ha-343000-m03) DBG | pid 10460 is in state "Stopped"
	I0906 12:06:12.314145   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid...
	I0906 12:06:12.314314   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Using UUID 5abf6194-a669-4f35-b6fc-c88bfc629e81
	I0906 12:06:12.392247   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Generated MAC 3e:84:3d:bc:9c:31
	I0906 12:06:12.392279   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:12.392453   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392498   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392570   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5abf6194-a669-4f35-b6fc-c88bfc629e81", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:12.392621   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5abf6194-a669-4f35-b6fc-c88bfc629e81 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:12.392631   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:12.394468   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Pid is 12285
	I0906 12:06:12.395082   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Attempt 0
	I0906 12:06:12.395129   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.395296   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 12285
	I0906 12:06:12.398168   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Searching for 3e:84:3d:bc:9c:31 in /var/db/dhcpd_leases ...
	I0906 12:06:12.398286   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:12.398303   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:12.398316   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:12.398325   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:12.398339   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:06:12.398359   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found match: 3e:84:3d:bc:9c:31
	I0906 12:06:12.398382   12253 main.go:141] libmachine: (ha-343000-m03) DBG | IP: 192.169.0.26
	I0906 12:06:12.398414   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetConfigRaw
	I0906 12:06:12.399172   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:12.399462   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.400029   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:12.400042   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.400184   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:12.400344   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:12.400464   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400591   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400728   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:12.400904   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:12.401165   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:12.401176   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:12.404210   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:12.438119   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:12.439198   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.439227   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.439241   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.439256   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.845267   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:12.845282   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:12.960204   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.960224   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.960244   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.960258   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.961041   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:12.961054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:06:18.729819   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:06:18.729887   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:06:18.729898   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:06:18.753054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:06:23.465534   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:06:23.465548   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465717   12253 buildroot.go:166] provisioning hostname "ha-343000-m03"
	I0906 12:06:23.465726   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465818   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.465902   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.465981   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466055   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466146   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.466265   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.466412   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.466421   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m03 && echo "ha-343000-m03" | sudo tee /etc/hostname
	I0906 12:06:23.536843   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m03
	
	I0906 12:06:23.536860   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.536985   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.537079   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537171   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537236   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.537354   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.537507   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.537525   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:06:23.606665   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:06:23.606681   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:06:23.606695   12253 buildroot.go:174] setting up certificates
	I0906 12:06:23.606700   12253 provision.go:84] configureAuth start
	I0906 12:06:23.606707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.606846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:23.606946   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.607022   12253 provision.go:143] copyHostCerts
	I0906 12:06:23.607051   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607104   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:06:23.607112   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607235   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:06:23.607441   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607476   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:06:23.607482   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607552   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:06:23.607719   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607747   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:06:23.607752   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607836   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:06:23.607981   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m03 san=[127.0.0.1 192.169.0.26 ha-343000-m03 localhost minikube]
	I0906 12:06:23.699873   12253 provision.go:177] copyRemoteCerts
	I0906 12:06:23.699921   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:06:23.699935   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.700077   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.700175   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.700270   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.700376   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:23.737703   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:06:23.737771   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:06:23.757756   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:06:23.757827   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:06:23.777598   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:06:23.777673   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:06:23.797805   12253 provision.go:87] duration metric: took 191.09552ms to configureAuth
	I0906 12:06:23.797818   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:06:23.797988   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:23.798002   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:23.798134   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.798231   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.798314   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798400   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798488   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.798597   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.798724   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.798732   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:06:23.860492   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:06:23.860504   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:06:23.860586   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:06:23.860599   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.860730   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.860807   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.860907   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.861010   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.861140   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.861285   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.861332   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:06:23.935021   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:06:23.935039   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.935186   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.935286   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935371   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935478   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.935609   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.935750   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.935762   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:06:25.580352   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:06:25.580366   12253 machine.go:96] duration metric: took 13.180301802s to provisionDockerMachine
	I0906 12:06:25.580373   12253 start.go:293] postStartSetup for "ha-343000-m03" (driver="hyperkit")
	I0906 12:06:25.580380   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:06:25.580394   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.580572   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:06:25.580585   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.580672   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.580761   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.580846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.580931   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.621691   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:06:25.626059   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:06:25.626069   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:06:25.626156   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:06:25.626292   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:06:25.626299   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:06:25.626479   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:06:25.640080   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:25.666256   12253 start.go:296] duration metric: took 85.87411ms for postStartSetup
	I0906 12:06:25.666279   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.666455   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:06:25.666469   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.666570   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.666655   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.666734   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.666815   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.704275   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:06:25.704337   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:06:25.737458   12253 fix.go:56] duration metric: took 13.516946704s for fixHost
	I0906 12:06:25.737482   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.737626   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.737732   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737832   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737920   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.738049   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:25.738192   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:25.738199   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:06:25.803149   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649585.904544960
	
	I0906 12:06:25.803162   12253 fix.go:216] guest clock: 1725649585.904544960
	I0906 12:06:25.803168   12253 fix.go:229] Guest: 2024-09-06 12:06:25.90454496 -0700 PDT Remote: 2024-09-06 12:06:25.737472 -0700 PDT m=+83.951104505 (delta=167.07296ms)
	I0906 12:06:25.803178   12253 fix.go:200] guest clock delta is within tolerance: 167.07296ms
	I0906 12:06:25.803182   12253 start.go:83] releasing machines lock for "ha-343000-m03", held for 13.582690615s
	I0906 12:06:25.803198   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.803329   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:25.825405   12253 out.go:177] * Found network options:
	I0906 12:06:25.846508   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25
	W0906 12:06:25.867569   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.867608   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.867639   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868819   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:06:25.868894   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	W0906 12:06:25.868907   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.868930   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.869032   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:06:25.869046   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.869089   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869194   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869217   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869337   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869358   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869516   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.869640   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	W0906 12:06:25.904804   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:06:25.904860   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:06:25.953607   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:06:25.953623   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:25.953707   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:25.969069   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:06:25.977320   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:06:25.985732   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:06:25.985790   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:06:25.994169   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.002564   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:06:26.011076   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.019409   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:06:26.027829   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:06:26.036100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:06:26.044789   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:06:26.053382   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:06:26.060878   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:06:26.068234   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.161656   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:06:26.180419   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:26.180540   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:06:26.197783   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.208495   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:06:26.223788   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.234758   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.245879   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:06:26.268201   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.279748   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:26.298675   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:06:26.301728   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:06:26.309959   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:06:26.323781   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:06:26.418935   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:06:26.520404   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:06:26.520429   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:06:26.534785   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.635772   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:06:28.931869   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296074778s)
	I0906 12:06:28.931929   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:06:28.943824   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:06:28.959441   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:28.970674   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:06:29.066042   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:06:29.168956   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.286202   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:06:29.299988   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:29.311495   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.429259   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:06:29.496621   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:06:29.496705   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:06:29.502320   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:06:29.502374   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:06:29.505587   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:06:29.534004   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:06:29.534083   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.551834   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.590600   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:06:29.632268   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:06:29.653333   12253 out.go:177]   - env NO_PROXY=192.169.0.24,192.169.0.25
	I0906 12:06:29.674153   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:29.674373   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:06:29.677525   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:29.687202   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:06:29.687389   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:29.687610   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.687639   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.696472   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56391
	I0906 12:06:29.696894   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.697234   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.697246   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.697502   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.697641   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:06:29.697736   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:29.697809   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:06:29.698794   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:06:29.699046   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.699070   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.707791   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56393
	I0906 12:06:29.708136   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.708457   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.708468   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.708696   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.708812   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:06:29.708911   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.26
	I0906 12:06:29.708917   12253 certs.go:194] generating shared ca certs ...
	I0906 12:06:29.708928   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:06:29.709069   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:06:29.709123   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:06:29.709132   12253 certs.go:256] generating profile certs ...
	I0906 12:06:29.709257   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:06:29.709340   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.e464bc73
	I0906 12:06:29.709394   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:06:29.709401   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:06:29.709422   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:06:29.709447   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:06:29.709465   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:06:29.709482   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:06:29.709510   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:06:29.709528   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:06:29.709550   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:06:29.709623   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:06:29.709661   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:06:29.709669   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:06:29.709702   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:06:29.709732   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:06:29.709766   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:06:29.709833   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:29.709868   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:06:29.709889   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:06:29.709908   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:29.709932   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:06:29.710030   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:06:29.710110   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:06:29.710211   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:06:29.710304   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:06:29.742607   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:06:29.746569   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:06:29.754558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:06:29.757841   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:06:29.765881   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:06:29.769140   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:06:29.778234   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:06:29.781483   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:06:29.789701   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:06:29.792877   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:06:29.801155   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:06:29.804562   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:06:29.812907   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:06:29.833527   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:06:29.854042   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:06:29.874274   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:06:29.894675   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:06:29.914759   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:06:29.935020   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:06:29.955774   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:06:29.976174   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:06:29.996348   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:06:30.016705   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:06:30.036752   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:06:30.050816   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:06:30.064469   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:06:30.078121   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:06:30.092155   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:06:30.106189   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:06:30.120313   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:06:30.134091   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:06:30.138549   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:06:30.147484   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151103   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151157   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.155470   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:06:30.164282   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:06:30.173035   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176736   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176783   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.181161   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:06:30.189862   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:06:30.198669   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202224   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202268   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.206651   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:06:30.215322   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:06:30.218903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:06:30.223374   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:06:30.227903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:06:30.232564   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:06:30.237667   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:06:30.242630   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:06:30.247576   12253 kubeadm.go:934] updating node {m03 192.169.0.26 8443 v1.31.0 docker true true} ...
	I0906 12:06:30.247652   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:06:30.247670   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:06:30.247719   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:06:30.261197   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:06:30.261239   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:06:30.261300   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:06:30.269438   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:06:30.269496   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:06:30.277362   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:06:30.291520   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:06:30.305340   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:06:30.319495   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:06:30.322637   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:30.332577   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.441240   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.456369   12253 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:06:30.456602   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:30.477910   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:06:30.498557   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.628440   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.645947   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:06:30.646165   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:06:30.646208   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:06:30.646371   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.646412   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:30.646417   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.646423   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.646427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649121   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:30.649426   12253 node_ready.go:49] node "ha-343000-m03" has status "Ready":"True"
	I0906 12:06:30.649435   12253 node_ready.go:38] duration metric: took 3.055625ms for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.649441   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:30.649480   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:30.649485   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.649491   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.655093   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:30.660461   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:30.660533   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:30.660539   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.660545   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.660550   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.664427   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:30.664864   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:30.664872   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.664877   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.664880   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.667569   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.161508   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.161522   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.161528   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.161531   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.164411   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.165052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.165061   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.165070   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.165074   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.167897   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.660843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.660861   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.660868   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.660871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.668224   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:31.668938   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.668954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.668969   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.668987   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.674737   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:32.161451   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.161468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.161496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.161501   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.164555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.165061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.165069   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.165075   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.165078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.167689   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.661269   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.661285   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.661294   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.661316   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.664943   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.665460   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.665469   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.665475   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.665479   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.667934   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.668229   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:33.161930   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.161964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.161971   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.161975   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.165689   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.166478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.166488   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.166497   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.166503   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.169565   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.660809   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.660831   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.660841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.660846   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.664137   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.665061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.665071   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.665078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.665099   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.667811   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.161378   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.161391   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.161398   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.161403   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.165094   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:34.165523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.165531   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.165537   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.165540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.167949   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.661206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.661222   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.661228   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.661230   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.663772   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.664499   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.664507   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.664513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.664517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.666543   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.161667   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.161689   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.161700   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.161705   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.166875   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.167311   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.167319   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.167324   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.167328   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.172902   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.173323   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:35.661973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.661988   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.661994   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.661998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.664583   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.664981   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.664989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.664998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.665001   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.667322   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.161747   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.161785   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.161793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.161796   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.164939   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.165450   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.165459   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.165464   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.165474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.167808   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.661492   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.661508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.661532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.661537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.664941   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.665455   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.665464   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.665471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.665474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.668192   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.161660   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.161678   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.161685   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.161688   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.164012   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.164541   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.164549   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.164555   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.164558   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.166577   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.662457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.662494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.662505   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.662511   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.665311   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.666039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.666048   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.666053   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.666056   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.668294   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.668600   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:38.162628   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.162646   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.162654   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.162659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.165660   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.166284   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.166292   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.166298   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.166301   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.168559   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.662170   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.662185   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.662191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.662195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.664733   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.665194   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.665207   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.665211   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.667563   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.161491   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.161508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.161517   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.161522   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.164370   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.164762   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.164770   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.164776   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.164780   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.166614   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:39.661843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.661860   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.661866   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.661871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.664287   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.664950   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.664958   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.664964   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.664968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.667194   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.160891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.160921   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.160933   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.160955   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.165388   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:40.166039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.166047   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.166052   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.166055   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.168212   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.168635   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:40.661892   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.661907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.661914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.661917   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.664471   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.664962   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.664970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.664975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.664984   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.667379   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.160779   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.160797   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.160824   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.160830   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.163878   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:41.164433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.164441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.164446   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.164451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.166991   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.661124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.661138   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.661145   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.661149   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.663595   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.664206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.664214   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.664220   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.664224   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.666219   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:42.161906   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.161926   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.161937   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.161945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.165222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:42.165752   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.165760   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.165765   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.165769   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.167913   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.661255   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.661274   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.661282   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.661288   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.664242   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.664689   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.664697   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.664703   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.664706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.666742   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.667053   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:43.161512   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.161530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.161565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.161575   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.164590   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:43.165234   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.165242   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.165254   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.165258   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.167961   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.660826   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.660844   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.660873   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.660882   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.663557   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.663959   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.663966   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.663972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.663976   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.665816   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.162103   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.162133   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.162158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.162164   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.165060   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.165598   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.165606   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.165612   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.165615   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.167589   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.662307   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.662328   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.662339   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.662344   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.665063   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.665602   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.665610   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.665615   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.665619   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.667607   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.667948   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:45.161277   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.161307   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.161314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.161317   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.163751   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.164201   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.164209   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.164215   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.164217   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.166274   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.662080   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.662099   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.662106   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.662110   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.664692   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.665145   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.665152   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.665158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.665162   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.667158   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.161983   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.162002   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.162011   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.162016   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.165135   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.165638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.165645   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.165650   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.165654   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.167660   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.660973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.661022   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.661036   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.661046   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.664600   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.665041   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.665051   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.665056   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.665061   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.667006   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:47.161827   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.161883   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.161895   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.161902   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.165549   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:47.166029   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.166037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.166041   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.166045   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.168233   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:47.168577   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:47.661554   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.661603   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.661616   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.661625   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.665796   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:47.666259   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.666266   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.666272   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.666276   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.668466   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.161876   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.161891   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.161898   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.161901   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.164419   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.164835   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.164843   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.164849   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.164853   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.166837   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:48.661562   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.661577   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.661598   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.661603   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.663972   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.664457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.664465   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.664470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.664475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.666445   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:49.161410   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.161430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.161438   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.161443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.164478   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:49.164982   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.164989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.164995   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.164998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.167071   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.660698   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.660724   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.660736   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.660742   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.664916   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:49.665349   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.665357   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.665363   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.665367   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.667392   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.667753   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:50.161030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.161065   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.161073   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.161080   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.163537   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.163963   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.163970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.163975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.163979   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.166093   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.661184   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.661238   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.661263   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.661267   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.663637   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.664117   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.664125   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.664131   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.664134   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.666067   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.161515   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.161550   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.161557   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.161561   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.163979   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.164681   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.164690   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.164694   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.164697   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.166790   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.661266   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.661291   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.661374   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.661387   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.664772   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:51.665195   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.665206   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.665216   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.667400   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.667769   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.667779   12253 pod_ready.go:82] duration metric: took 21.007261829s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
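The repeated `pod_ready` probes above reduce to one check per GET response: does the pod carry a `Ready` condition with status `True`? A minimal self-contained sketch of that check follows, using simplified stand-in types rather than the real `k8s.io/api/core/v1` structs (the field names here are illustrative, not minikube's actual code):

```go
package main

import "fmt"

// PodCondition and Pod are simplified stand-ins for the Kubernetes API
// types the poller inspects; real code uses k8s.io/api/core/v1.
type PodCondition struct {
	Type   string
	Status string
}

type Pod struct {
	Name       string
	Conditions []PodCondition
}

// isPodReady reports whether the pod's Ready condition is True, mirroring
// the check behind the `has status "Ready":"False"` log lines.
func isPodReady(p Pod) bool {
	for _, c := range p.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	waiting := Pod{Name: "coredns-6f6b679f8f-99jtt",
		Conditions: []PodCondition{{Type: "Ready", Status: "False"}}}
	ready := Pod{Name: "coredns-6f6b679f8f-99jtt",
		Conditions: []PodCondition{{Type: "Ready", Status: "True"}}}
	fmt.Println(isPodReady(waiting)) // false
	fmt.Println(isPodReady(ready))   // true
}
```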
	I0906 12:06:51.667785   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667821   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:51.667826   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.667831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.667836   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.669791   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.670205   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.670213   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.670218   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.670221   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.672346   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.672671   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.672679   12253 pod_ready.go:82] duration metric: took 4.889471ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672685   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672718   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:51.672723   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.672729   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.672737   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.674649   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.675030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.675037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.675043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.675046   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.676915   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.677288   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.677297   12253 pod_ready.go:82] duration metric: took 4.607311ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677303   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677339   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:51.677344   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.677349   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.677352   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.679418   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.679897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:51.679907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.679916   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.679920   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.681919   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.682327   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.682336   12253 pod_ready.go:82] duration metric: took 5.028149ms for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682343   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682376   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:51.682381   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.682386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.682389   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.684781   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.685200   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:51.685207   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.685212   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.685215   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.687181   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.687676   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.687685   12253 pod_ready.go:82] duration metric: took 5.337542ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.687696   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.862280   12253 request.go:632] Waited for 174.544275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862360   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862372   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.862382   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.862386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.865455   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.062085   12253 request.go:632] Waited for 196.080428ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.062136   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.062140   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.064928   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.065322   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.065331   12253 pod_ready.go:82] duration metric: took 377.628905ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.065338   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.261393   12253 request.go:632] Waited for 196.009549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261459   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261471   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.261485   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.261492   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.265336   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.461317   12253 request.go:632] Waited for 195.311084ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461362   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.461370   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.461376   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.464202   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.464645   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.464654   12253 pod_ready.go:82] duration metric: took 399.309786ms for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.464661   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.662233   12253 request.go:632] Waited for 197.535092ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662290   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662297   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.662305   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.662311   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.665143   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.862031   12253 request.go:632] Waited for 196.411368ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862119   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.862140   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.862145   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.866136   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.866533   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.866543   12253 pod_ready.go:82] duration metric: took 401.876526ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.866550   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.061387   12253 request.go:632] Waited for 194.796135ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061453   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061462   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.061470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.061476   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.064293   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:53.261526   12253 request.go:632] Waited for 196.74771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261649   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.261659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.261674   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.265603   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.266028   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.266036   12253 pod_ready.go:82] duration metric: took 399.480241ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.266042   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.461478   12253 request.go:632] Waited for 195.397016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461556   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461564   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.461571   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.461576   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.464932   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.661907   12253 request.go:632] Waited for 196.48537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661965   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661991   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.661998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.662002   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.665079   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.665555   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.665565   12253 pod_ready.go:82] duration metric: took 399.515968ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.665572   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.861347   12253 request.go:632] Waited for 195.73444ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861414   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861426   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.861434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.861439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.864177   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:54.061465   12253 request.go:632] Waited for 196.861398ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061517   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061554   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.061565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.061570   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.064700   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.065020   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.065030   12253 pod_ready.go:82] duration metric: took 399.451485ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.065037   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.263289   12253 request.go:632] Waited for 198.174584ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263384   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263411   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.263436   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.263461   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.266722   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.461554   12253 request.go:632] Waited for 194.387224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461599   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461609   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.461620   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.461627   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.465162   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.465533   12253 pod_ready.go:98] node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465543   12253 pod_ready.go:82] duration metric: took 400.500434ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	E0906 12:06:54.465549   12253 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465555   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.662665   12253 request.go:632] Waited for 197.074891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662731   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662740   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.662749   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.662755   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.665777   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.862800   12253 request.go:632] Waited for 196.680356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862911   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862924   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.862936   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.862945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.866911   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.867361   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.867371   12253 pod_ready.go:82] duration metric: took 401.810264ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.867377   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.062512   12253 request.go:632] Waited for 195.060729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062609   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062629   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.062641   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.062648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.066272   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:55.263362   12253 request.go:632] Waited for 196.717271ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263483   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.263507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.263520   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.268072   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.268453   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.268462   12253 pod_ready.go:82] duration metric: took 401.079128ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.268469   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.462230   12253 request.go:632] Waited for 193.721938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462312   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462320   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.462348   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.462357   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.465173   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:55.662089   12253 request.go:632] Waited for 196.464134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662239   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662255   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.662267   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.662275   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.666427   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.666704   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.666714   12253 pod_ready.go:82] duration metric: took 398.240112ms for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.666721   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.861681   12253 request.go:632] Waited for 194.913797ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861767   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861778   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.861790   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.861799   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.865874   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:56.063343   12253 request.go:632] Waited for 197.091674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063491   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.063501   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.063508   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.067298   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.067689   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.067699   12253 pod_ready.go:82] duration metric: took 400.971333ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.067706   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.261328   12253 request.go:632] Waited for 193.578385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261416   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261431   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.261443   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.261451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.264964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.461367   12253 request.go:632] Waited for 196.051039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.461449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.461454   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.464367   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:56.464786   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.464799   12253 pod_ready.go:82] duration metric: took 397.083037ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.464806   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.662171   12253 request.go:632] Waited for 197.309952ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662326   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662340   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.662352   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.662363   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.665960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.862106   12253 request.go:632] Waited for 195.559257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862214   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862225   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.862236   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.862243   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.866072   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.866312   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.866321   12253 pod_ready.go:82] duration metric: took 401.509457ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.866329   12253 pod_ready.go:39] duration metric: took 26.216828833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:56.866341   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:56.866386   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:56.878910   12253 api_server.go:72] duration metric: took 26.422463192s to wait for apiserver process to appear ...
	I0906 12:06:56.878922   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:56.878935   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:56.883745   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:56.883791   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:56.883796   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.883803   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.883808   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.884469   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:56.884556   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:56.884568   12253 api_server.go:131] duration metric: took 5.641059ms to wait for apiserver health ...
	I0906 12:06:56.884573   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:57.061374   12253 request.go:632] Waited for 176.731786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.061480   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.061487   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.066391   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:57.071924   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:57.071938   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.071942   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.071945   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.071948   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.071952   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.071955   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.071958   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.071962   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.071964   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.071967   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.071973   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.071977   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.071979   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.071982   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.071985   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.071988   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.071991   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.071993   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.071996   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.071999   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.072001   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.072004   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.072007   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.072009   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.072012   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.072017   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.072022   12253 system_pods.go:74] duration metric: took 187.444826ms to wait for pod list to return data ...
	I0906 12:06:57.072029   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:57.261398   12253 request.go:632] Waited for 189.325312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261443   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261451   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.261471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.261475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.264018   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:57.264078   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:57.264086   12253 default_sa.go:55] duration metric: took 192.051635ms for default service account to be created ...
	I0906 12:06:57.264103   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:57.461307   12253 request.go:632] Waited for 197.162907ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461342   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461347   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.461367   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.461393   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.466559   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:57.471959   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:57.471969   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.471974   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.471977   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.471981   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.471985   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.471989   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.471992   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.471994   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.471997   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.472000   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.472003   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.472006   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.472009   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.472012   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.472015   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.472017   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.472020   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.472023   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.472026   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.472029   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.472031   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.472034   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.472037   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.472040   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.472043   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.472047   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.472052   12253 system_pods.go:126] duration metric: took 207.94336ms to wait for k8s-apps to be running ...
	I0906 12:06:57.472059   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:57.472107   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:57.483773   12253 system_svc.go:56] duration metric: took 11.709185ms WaitForService to wait for kubelet
	I0906 12:06:57.483792   12253 kubeadm.go:582] duration metric: took 27.027343725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:57.483805   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:57.662348   12253 request.go:632] Waited for 178.494779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662425   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.662448   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.662457   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.665964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:57.666853   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666864   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666872   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666875   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666879   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666882   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666885   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666892   12253 node_conditions.go:105] duration metric: took 183.082589ms to run NodePressure ...
	I0906 12:06:57.666899   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:57.666913   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:57.689595   12253 out.go:201] 
	I0906 12:06:57.710968   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:57.711085   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.733311   12253 out.go:177] * Starting "ha-343000-m04" worker node in "ha-343000" cluster
	I0906 12:06:57.776497   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:57.776531   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:57.776758   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:57.776776   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:57.776887   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.777953   12253 start.go:360] acquireMachinesLock for ha-343000-m04: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:57.778066   12253 start.go:364] duration metric: took 90.409µs to acquireMachinesLock for "ha-343000-m04"
	I0906 12:06:57.778091   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:57.778100   12253 fix.go:54] fixHost starting: m04
	I0906 12:06:57.778535   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:57.778560   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:57.788011   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56397
	I0906 12:06:57.788364   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:57.788747   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:57.788763   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:57.789004   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:57.789119   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.789216   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:06:57.789290   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.789388   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 12:06:57.790320   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid 10558 missing from process table
	I0906 12:06:57.790346   12253 fix.go:112] recreateIfNeeded on ha-343000-m04: state=Stopped err=<nil>
	I0906 12:06:57.790354   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	W0906 12:06:57.790423   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:57.811236   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m04" ...
	I0906 12:06:57.853317   12253 main.go:141] libmachine: (ha-343000-m04) Calling .Start
	I0906 12:06:57.853695   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.853752   12253 main.go:141] libmachine: (ha-343000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid
	I0906 12:06:57.853833   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Using UUID 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5
	I0906 12:06:57.879995   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Generated MAC 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.880018   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:57.880162   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880191   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880277   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:57.880319   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:57.880330   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:57.881745   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Pid is 12301
	I0906 12:06:57.882213   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Attempt 0
	I0906 12:06:57.882229   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.882285   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 12301
	I0906 12:06:57.884227   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Searching for 6a:d8:ba:fa:e9:e7 in /var/db/dhcpd_leases ...
	I0906 12:06:57.884329   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:57.884344   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:06:57.884361   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:57.884375   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:57.884400   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:57.884406   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetConfigRaw
	I0906 12:06:57.884413   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found match: 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.884464   12253 main.go:141] libmachine: (ha-343000-m04) DBG | IP: 192.169.0.27
	I0906 12:06:57.885084   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:06:57.885308   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.885947   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:57.885958   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.886118   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:06:57.886263   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:06:57.886401   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886518   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886625   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:06:57.886755   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:57.886913   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:06:57.886920   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:57.890225   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:57.898506   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:57.900023   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:57.900046   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:57.900059   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:57.900081   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.292623   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:58.292638   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:58.407402   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:58.407425   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:58.407438   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:58.407462   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.408295   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:58.408305   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:07:04.116677   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:07:04.116760   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:07:04.116771   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:07:04.140349   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:07:32.960229   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:07:32.960245   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960393   12253 buildroot.go:166] provisioning hostname "ha-343000-m04"
	I0906 12:07:32.960404   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960498   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:32.960578   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:32.960651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960733   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960822   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:32.960938   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:32.961089   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:32.961097   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m04 && echo "ha-343000-m04" | sudo tee /etc/hostname
	I0906 12:07:33.029657   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m04
	
	I0906 12:07:33.029671   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.029803   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.029895   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.029994   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.030077   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.030212   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.030354   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.030365   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:07:33.094966   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:07:33.094982   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:07:33.094992   12253 buildroot.go:174] setting up certificates
	I0906 12:07:33.094999   12253 provision.go:84] configureAuth start
	I0906 12:07:33.095005   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:33.095148   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:33.095261   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.095345   12253 provision.go:143] copyHostCerts
	I0906 12:07:33.095383   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095445   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:07:33.095451   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095595   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:07:33.095788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095828   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:07:33.095833   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095913   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:07:33.096069   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096123   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:07:33.096133   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096216   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:07:33.096362   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m04 san=[127.0.0.1 192.169.0.27 ha-343000-m04 localhost minikube]
	I0906 12:07:33.148486   12253 provision.go:177] copyRemoteCerts
	I0906 12:07:33.148536   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:07:33.148551   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.148688   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.148785   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.148886   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.148968   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:33.184847   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:07:33.184925   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:07:33.204793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:07:33.204868   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:07:33.225189   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:07:33.225262   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:07:33.245047   12253 provision.go:87] duration metric: took 150.030083ms to configureAuth
	I0906 12:07:33.245064   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:07:33.245233   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:07:33.245264   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:33.245394   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.245474   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.245563   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245656   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245735   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.245857   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.245998   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.246006   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:07:33.305766   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:07:33.305779   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:07:33.305852   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:07:33.305865   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.305998   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.306097   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306198   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306282   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.306410   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.306555   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.306603   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:07:33.377062   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	Environment=NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:07:33.377081   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.377218   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.377309   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377395   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377470   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.377595   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.377731   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.377745   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:07:34.969419   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:07:34.969435   12253 machine.go:96] duration metric: took 37.07976383s to provisionDockerMachine
	I0906 12:07:34.969443   12253 start.go:293] postStartSetup for "ha-343000-m04" (driver="hyperkit")
	I0906 12:07:34.969451   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:07:34.969464   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:34.969653   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:07:34.969667   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:34.969755   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:34.969839   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:34.969938   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:34.970026   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.005883   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:07:35.009124   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:07:35.009135   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:07:35.009234   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:07:35.009411   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:07:35.009418   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:07:35.009642   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:07:35.017147   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:07:35.037468   12253 start.go:296] duration metric: took 68.014068ms for postStartSetup
	I0906 12:07:35.037488   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.037659   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:07:35.037673   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.037762   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.037851   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.037939   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.038032   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.073675   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:07:35.073738   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:07:35.107246   12253 fix.go:56] duration metric: took 37.325422655s for fixHost
	I0906 12:07:35.107273   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.107423   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.107527   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107605   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107700   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.107824   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:35.107967   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:35.107979   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:07:35.169429   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649655.267789382
	
	I0906 12:07:35.169443   12253 fix.go:216] guest clock: 1725649655.267789382
	I0906 12:07:35.169449   12253 fix.go:229] Guest: 2024-09-06 12:07:35.267789382 -0700 PDT Remote: 2024-09-06 12:07:35.107262 -0700 PDT m=+153.317111189 (delta=160.527382ms)
	I0906 12:07:35.169466   12253 fix.go:200] guest clock delta is within tolerance: 160.527382ms
	I0906 12:07:35.169472   12253 start.go:83] releasing machines lock for "ha-343000-m04", held for 37.387671405s
	I0906 12:07:35.169494   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.169634   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:35.192021   12253 out.go:177] * Found network options:
	I0906 12:07:35.212912   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	W0906 12:07:35.233597   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233618   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233628   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.233643   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234159   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234366   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234455   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:07:35.234491   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	W0906 12:07:35.234542   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234565   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234576   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.234648   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:07:35.234651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234665   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.234826   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234871   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235007   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235056   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235182   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235206   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.235315   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	W0906 12:07:35.268496   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:07:35.268557   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:07:35.318514   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:07:35.318528   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.318592   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.333874   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:07:35.343295   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:07:35.352492   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.352552   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:07:35.361630   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.370668   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:07:35.379741   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.389143   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:07:35.398542   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:07:35.407763   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:07:35.416819   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:07:35.426383   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:07:35.434689   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:07:35.442821   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:35.546285   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:07:35.565383   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.565458   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:07:35.587708   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.599182   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:07:35.618394   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.629619   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.640716   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:07:35.663169   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.673665   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.688883   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:07:35.691747   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:07:35.698972   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:07:35.712809   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:07:35.816741   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:07:35.926943   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.926972   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:07:35.942083   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:36.036699   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:08:37.056745   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01976389s)
	I0906 12:08:37.056810   12253 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:08:37.092348   12253 out.go:201] 
	W0906 12:08:37.113034   12253 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:07:33 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388087675Z" level=info msg="Starting up"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388874857Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.389448447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.406541023Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421511237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421602459Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421668995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421705837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421880023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421931200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422075608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422118185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422150327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422179563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422320644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422541368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424094220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424143575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424295349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424338381Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424460558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424511586Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425636722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425688205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425727379Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425760048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425791193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425860087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426020444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426094135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426129732Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426167338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426204356Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426237806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426268346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426298666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426328562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426358230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426389211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426418321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426456445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426487889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426516746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426546507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426578999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426618589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426715802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426750125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426780114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426818663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426851076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426879866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426909029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426949139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426988055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427021053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427049769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427133633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427177682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427207151Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427236043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427298115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427372740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427431600Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427611432Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427700568Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427760941Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427803687Z" level=info msg="containerd successfully booted in 0.022207s"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.407865115Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.420336385Z" level=info msg="Loading containers: start."
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.515687290Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.987987334Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.032534306Z" level=info msg="Loading containers: done."
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.046984897Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.047174717Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066396312Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066609197Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:07:35 ha-343000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.147371084Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:07:36 ha-343000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.149138373Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.151983630Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152081675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152156440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:37 ha-343000-m04 dockerd[1111]: time="2024-09-06T19:07:37.182746438Z" level=info msg="Starting up"
	Sep 06 19:08:37 ha-343000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:07:33 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388087675Z" level=info msg="Starting up"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388874857Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.389448447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.406541023Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421511237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421602459Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421668995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421705837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421880023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421931200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422075608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422118185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422150327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422179563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422320644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422541368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424094220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424143575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424295349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424338381Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424460558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424511586Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425636722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425688205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425727379Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425760048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425791193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425860087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426020444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426094135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426129732Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426167338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426204356Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426237806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426268346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426298666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426328562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426358230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426389211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426418321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426456445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426487889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426516746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426546507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426578999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426618589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426715802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426750125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426780114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426818663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426851076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426879866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426909029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426949139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426988055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427021053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427049769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427133633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427177682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427207151Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427236043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427298115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427372740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427431600Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427611432Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427700568Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427760941Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427803687Z" level=info msg="containerd successfully booted in 0.022207s"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.407865115Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.420336385Z" level=info msg="Loading containers: start."
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.515687290Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.987987334Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.032534306Z" level=info msg="Loading containers: done."
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.046984897Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.047174717Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066396312Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066609197Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:07:35 ha-343000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.147371084Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:07:36 ha-343000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.149138373Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.151983630Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152081675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152156440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:37 ha-343000-m04 dockerd[1111]: time="2024-09-06T19:07:37.182746438Z" level=info msg="Starting up"
	Sep 06 19:08:37 ha-343000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:08:37.113090   12253 out.go:270] * 
	* 
	W0906 12:08:37.114019   12253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:08:37.156019   12253 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-343000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 logs -n 25: (3.23696243s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m04 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp testdata/cp-test.txt                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000 sudo cat                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m03 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-343000 node stop m02 -v=7                                                                                                 | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-343000 node start m02 -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000 -v=7                                                                                                       | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-343000 -v=7                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 12:00 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:00 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	| node    | ha-343000 node delete m03 -v=7                                                                                               | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-343000 stop -v=7                                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT | 06 Sep 24 12:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true                                                                                                     | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:05 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:05:01
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:05:01.821113   12253 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:05:01.821396   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821403   12253 out.go:358] Setting ErrFile to fd 2...
	I0906 12:05:01.821407   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821585   12253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:05:01.822962   12253 out.go:352] Setting JSON to false
	I0906 12:05:01.845482   12253 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11072,"bootTime":1725638429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:05:01.845567   12253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:05:01.867344   12253 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:05:01.909192   12253 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:05:01.909251   12253 notify.go:220] Checking for updates...
	I0906 12:05:01.951681   12253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:01.972896   12253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:05:01.993997   12253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:05:02.014915   12253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:05:02.036376   12253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:05:02.058842   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:02.059362   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.059426   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.069603   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56303
	I0906 12:05:02.069962   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.070394   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.070407   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.070602   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.070721   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.070905   12253 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:05:02.071152   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.071173   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.079785   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56305
	I0906 12:05:02.080100   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.080480   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.080508   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.080753   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.080876   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.109151   12253 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:05:02.151203   12253 start.go:297] selected driver: hyperkit
	I0906 12:05:02.151225   12253 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:d
efault APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gv
isor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.151398   12253 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:05:02.151526   12253 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.151681   12253 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:05:02.160708   12253 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:05:02.164397   12253 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.164417   12253 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:05:02.167034   12253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:05:02.167076   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:02.167082   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:02.167157   12253 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.167283   12253 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.209167   12253 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:05:02.230210   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:02.230284   12253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:05:02.230304   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:02.230523   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:02.230539   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:02.230657   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.231246   12253 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:02.231321   12253 start.go:364] duration metric: took 58.855µs to acquireMachinesLock for "ha-343000"
	I0906 12:05:02.231338   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:02.231348   12253 fix.go:54] fixHost starting: 
	I0906 12:05:02.231579   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.231602   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.240199   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56307
	I0906 12:05:02.240538   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.240898   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.240906   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.241115   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.241241   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.241344   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:02.241429   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.241509   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:05:02.242441   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 12107 missing from process table
	I0906 12:05:02.242473   12253 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:05:02.242488   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:05:02.242570   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:02.285299   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:05:02.308252   12253 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:05:02.308536   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.308568   12253 main.go:141] libmachine: (ha-343000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:05:02.308690   12253 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:05:02.418778   12253 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:05:02.418805   12253 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:02.418989   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419036   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419095   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:02.419142   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:02.419160   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:02.420829   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Pid is 12266
	I0906 12:05:02.421178   12253 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:05:02.421194   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.421256   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:02.422249   12253 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:05:02.422316   12253 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:02.422340   12253 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66db525c}
	I0906 12:05:02.422356   12253 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:05:02.422371   12253 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:05:02.422430   12253 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:05:02.423159   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:02.423357   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.423787   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:02.423798   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.423945   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:02.424057   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:02.424240   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424491   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:02.424632   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:02.424882   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:02.424892   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:02.428574   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:02.479264   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:02.479938   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.479953   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.479971   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.479984   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.867700   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:02.867715   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:02.983045   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.983079   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.983090   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.983110   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.983957   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:02.983967   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:08.596032   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:05:08.596072   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:05:08.596081   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:05:08.620302   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:05:13.496727   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:13.496743   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.496887   12253 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:05:13.496898   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.497005   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.497091   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.497190   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497290   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497391   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.497515   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.497658   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.497666   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:05:13.573506   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:05:13.573525   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.573649   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.573744   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573841   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573933   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.574054   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.574199   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.574210   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:13.646449   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:13.646474   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:13.646492   12253 buildroot.go:174] setting up certificates
	I0906 12:05:13.646500   12253 provision.go:84] configureAuth start
	I0906 12:05:13.646506   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.646647   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:13.646742   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.646835   12253 provision.go:143] copyHostCerts
	I0906 12:05:13.646872   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.646964   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:13.646972   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.647092   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:13.647297   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647337   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:13.647342   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647419   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:13.647566   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647604   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:13.647609   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647688   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:13.647833   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:05:13.694032   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:13.694082   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:13.694097   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.694208   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.694294   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.694394   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.694509   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:13.734054   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:13.734119   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:13.754153   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:13.754219   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 12:05:13.773776   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:13.773840   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:05:13.793258   12253 provision.go:87] duration metric: took 146.744964ms to configureAuth
	I0906 12:05:13.793272   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:13.793440   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:13.793455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:13.793596   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.793699   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.793786   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793872   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793955   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.794076   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.794207   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.794215   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:13.860967   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:13.860981   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:13.861068   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:13.861082   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.861205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.861297   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861411   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861521   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.861683   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.861822   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.861868   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:13.937805   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:13.937827   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.937964   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.938080   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938295   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.938419   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.938558   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.938571   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:15.619728   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:15.619742   12253 machine.go:96] duration metric: took 13.195921245s to provisionDockerMachine
	I0906 12:05:15.619754   12253 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:05:15.619762   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:15.619772   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.619950   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:15.619966   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.620058   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.620154   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.620257   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.620337   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.660028   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:15.663309   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:15.663323   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:15.663418   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:15.663631   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:15.663638   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:15.663848   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:15.671393   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:15.691128   12253 start.go:296] duration metric: took 71.364923ms for postStartSetup
	I0906 12:05:15.691156   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.691327   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:15.691341   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.691453   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.691544   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.691628   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.691712   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.732095   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:15.732157   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:15.785220   12253 fix.go:56] duration metric: took 13.553838389s for fixHost
	I0906 12:05:15.785242   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.785373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.785462   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785558   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785650   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.785774   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:15.785926   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:15.785933   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:15.851168   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649515.950195219
	
	I0906 12:05:15.851179   12253 fix.go:216] guest clock: 1725649515.950195219
	I0906 12:05:15.851184   12253 fix.go:229] Guest: 2024-09-06 12:05:15.950195219 -0700 PDT Remote: 2024-09-06 12:05:15.785232 -0700 PDT m=+13.999000936 (delta=164.963219ms)
	I0906 12:05:15.851205   12253 fix.go:200] guest clock delta is within tolerance: 164.963219ms
	I0906 12:05:15.851209   12253 start.go:83] releasing machines lock for "ha-343000", held for 13.619855055s
	I0906 12:05:15.851228   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851359   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:15.851455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851761   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851860   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851943   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:15.851974   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852006   12253 ssh_runner.go:195] Run: cat /version.json
	I0906 12:05:15.852029   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852070   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852126   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852163   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852217   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852273   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852292   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852391   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.852414   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.945582   12253 ssh_runner.go:195] Run: systemctl --version
	I0906 12:05:15.950518   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:05:15.954710   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:15.954750   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:15.972724   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:15.972739   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:15.972842   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:15.997626   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:16.009969   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:16.021002   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.021063   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:16.029939   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.039024   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:16.047772   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.056625   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:16.065543   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:16.074247   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:16.082976   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:16.091738   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:16.099691   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:16.107701   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.207522   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:16.227285   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:16.227363   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:16.242536   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.255682   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:16.272770   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.283410   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.293777   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:16.316221   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.326357   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:16.341265   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:16.344224   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:16.351341   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:16.364686   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:16.462680   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:16.567102   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.567167   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:16.581141   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.682906   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:19.018795   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.33586105s)
	I0906 12:05:19.018863   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:19.029907   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:19.042839   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.053183   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:19.161103   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:19.269627   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.376110   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:19.389292   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.400498   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.508773   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:19.574293   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:19.574369   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:19.578648   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:19.578702   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:19.581725   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:19.611289   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:19.611360   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.628755   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.690349   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:19.690435   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:19.690798   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:19.695532   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:19.705484   12253 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:05:19.705569   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:19.705619   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.718680   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.718691   12253 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:05:19.718764   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.731988   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.732008   12253 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:05:19.732017   12253 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:05:19.732095   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:19.732160   12253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:05:19.769790   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:19.769810   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:19.769820   12253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:05:19.769836   12253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:05:19.769924   12253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
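The kubeadm config dumped above is one YAML stream containing four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A minimal sketch of splitting such a stream and listing each document's `kind`, using a trimmed copy of the stream (field values omitted; `doc_kinds` is an illustrative helper, not a minikube function):

```python
# Trimmed-down copy of the four-document kubeadm config stream above.
config = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def doc_kinds(stream: str) -> list[str]:
    """Split a multi-document YAML stream on '---' and collect each 'kind:' value."""
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
    return kinds

print(doc_kinds(config))
```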
	I0906 12:05:19.769938   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:19.769993   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:19.783021   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:19.783091   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:19.783139   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:19.790731   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:19.790780   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:05:19.798087   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:05:19.811294   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:19.826571   12253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:05:19.840214   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:19.853805   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:19.856803   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
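The `/etc/hosts` rewrite in the command above is idempotent: `grep -v` drops any existing `control-plane.minikube.internal` line before the new `IP<TAB>name` mapping is appended. A rough Python equivalent of that filter-and-append step (`upsert_host` is an illustrative name, not from minikube):

```python
def upsert_host(hosts: str, ip: str, name: str) -> str:
    """Drop any line ending in '<TAB><name>', then append 'ip<TAB>name'."""
    kept = [line for line in hosts.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.169.0.1\tcontrol-plane.minikube.internal\n"
after = upsert_host(before, "192.169.0.254", "control-plane.minikube.internal")
print(after)
```

Running the function a second time with the same arguments leaves the file unchanged, which is what makes the shell one-liner safe to re-run on every start.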
	I0906 12:05:19.866597   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.969582   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:19.984116   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:05:19.984128   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:19.984139   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:19.984324   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:19.984402   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:19.984413   12253 certs.go:256] generating profile certs ...
	I0906 12:05:19.984529   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:19.984611   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:05:19.984683   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:19.984690   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:19.984715   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:19.984733   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:19.984750   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:19.984767   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:19.984795   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:19.984823   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:19.984846   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:19.984950   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:19.984995   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:19.985004   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:19.985045   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:19.985074   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:19.985102   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:19.985164   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:19.985201   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:19.985223   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:19.985241   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:19.985738   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:20.016977   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:20.040002   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:20.074896   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:20.096785   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:20.117992   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:20.152101   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:20.181980   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:20.249104   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:20.310747   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:20.334377   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:20.354759   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:05:20.368573   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:20.372727   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:20.381943   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385218   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385254   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.389369   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:20.398370   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:20.407468   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410735   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410769   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.414896   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:20.423953   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:20.432893   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436127   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436161   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.440280   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:05:20.449469   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:20.452834   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:20.457085   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:20.461715   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:20.466070   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:20.470282   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:20.474449   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:05:20.478690   12253 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:
192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fa
lse helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:20.478796   12253 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:05:20.491888   12253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:05:20.500336   12253 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:05:20.500348   12253 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:05:20.500388   12253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:05:20.508605   12253 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:05:20.508923   12253 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.509004   12253 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:05:20.509222   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.509871   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.510072   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:05:20.510389   12253 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:05:20.510569   12253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:05:20.518433   12253 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:05:20.518445   12253 kubeadm.go:597] duration metric: took 18.093623ms to restartPrimaryControlPlane
	I0906 12:05:20.518450   12253 kubeadm.go:394] duration metric: took 39.76917ms to StartCluster
	I0906 12:05:20.518463   12253 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.518535   12253 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.518965   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.519194   12253 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:20.519207   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:05:20.519217   12253 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:05:20.519329   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.562952   12253 out.go:177] * Enabled addons: 
	I0906 12:05:20.584902   12253 addons.go:510] duration metric: took 65.689522ms for enable addons: enabled=[]
	I0906 12:05:20.584940   12253 start.go:246] waiting for cluster config update ...
	I0906 12:05:20.584973   12253 start.go:255] writing updated cluster config ...
	I0906 12:05:20.608171   12253 out.go:201] 
	I0906 12:05:20.630349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.630488   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.652951   12253 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:05:20.695164   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:20.695203   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:20.695405   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:20.695421   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:20.695517   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.696367   12253 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:20.696454   12253 start.go:364] duration metric: took 67.794µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:05:20.696472   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:20.696479   12253 fix.go:54] fixHost starting: m02
	I0906 12:05:20.696771   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:20.696805   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:20.705845   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56329
	I0906 12:05:20.706183   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:20.706528   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:20.706543   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:20.706761   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:20.706875   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.706980   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:05:20.707064   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.707136   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:05:20.708055   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.708088   12253 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:05:20.708098   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:05:20.708185   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:20.734735   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:05:20.776747   12253 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:05:20.777073   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.777115   12253 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:05:20.778701   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.778717   12253 main.go:141] libmachine: (ha-343000-m02) DBG | pid 12118 is in state "Stopped"
	I0906 12:05:20.778778   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:05:20.779095   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:05:20.806950   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:05:20.806972   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:20.807155   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807233   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807304   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:20.807361   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:20.807374   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:20.808851   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Pid is 12276
	I0906 12:05:20.809435   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:05:20.809451   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.809514   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12276
	I0906 12:05:20.811081   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:05:20.811162   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:20.811181   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:05:20.811209   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca2f2}
	I0906 12:05:20.811220   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:05:20.811238   12253 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:05:20.811245   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:05:20.811904   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:20.812111   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.812569   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:20.812582   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.812711   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:20.812849   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:20.812941   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813131   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:20.813262   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:20.813401   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:20.813411   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:20.817160   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:20.825311   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:20.826263   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:20.826278   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:20.826305   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:20.826316   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.214947   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:21.214961   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:21.329668   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:21.329695   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:21.329711   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:21.329721   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.330549   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:21.330560   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:26.960134   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:05:26.960175   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:05:26.960183   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:05:26.984271   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:05:30.128139   12253 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.25:22: connect: connection refused
	I0906 12:05:33.191918   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:33.191932   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192104   12253 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:05:33.192113   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192203   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.192293   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.192374   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192456   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192573   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.192685   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.192834   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.192848   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:05:33.271080   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:05:33.271107   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.271242   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.271343   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271432   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271517   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.271653   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.271816   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.271828   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:33.340749   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:33.340766   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:33.340776   12253 buildroot.go:174] setting up certificates
	I0906 12:05:33.340781   12253 provision.go:84] configureAuth start
	I0906 12:05:33.340788   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.340917   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:33.341015   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.341102   12253 provision.go:143] copyHostCerts
	I0906 12:05:33.341127   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341183   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:33.341189   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341303   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:33.341481   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341516   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:33.341521   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341626   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:33.341793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341824   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:33.341829   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341902   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:33.342105   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:05:33.430053   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:33.430099   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:33.430112   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.430247   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.430337   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.430424   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.430498   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:33.468786   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:33.468854   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:33.488429   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:33.488502   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:05:33.507788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:33.507853   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:05:33.527149   12253 provision.go:87] duration metric: took 186.359429ms to configureAuth
	I0906 12:05:33.527164   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:33.527349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:33.527363   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:33.527493   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.527581   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.527670   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527752   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527834   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.527941   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.528081   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.528089   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:33.592983   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:33.592995   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:33.593066   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:33.593077   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.593197   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.593303   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593392   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593487   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.593630   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.593775   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.593821   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:33.669226   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:33.669253   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.669404   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.669513   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669628   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669726   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.669876   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.670026   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.670038   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:35.327313   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:35.327328   12253 machine.go:96] duration metric: took 14.51472045s to provisionDockerMachine
	I0906 12:05:35.327335   12253 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:05:35.327345   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:35.327357   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.327550   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:35.327564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.327658   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.327737   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.327824   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.327895   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.374953   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:35.380104   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:35.380118   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:35.380209   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:35.380346   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:35.380353   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:35.380535   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:35.392904   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:35.425316   12253 start.go:296] duration metric: took 97.970334ms for postStartSetup
	I0906 12:05:35.425336   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.425510   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:35.425521   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.425611   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.425700   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.425784   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.425866   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.465210   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:35.465270   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:35.519276   12253 fix.go:56] duration metric: took 14.822763667s for fixHost
	I0906 12:05:35.519322   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.519466   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.519564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519682   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519766   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.519897   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:35.520049   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:35.520058   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:35.586671   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649535.517793561
	
	I0906 12:05:35.586682   12253 fix.go:216] guest clock: 1725649535.517793561
	I0906 12:05:35.586690   12253 fix.go:229] Guest: 2024-09-06 12:05:35.517793561 -0700 PDT Remote: 2024-09-06 12:05:35.519294 -0700 PDT m=+33.733024449 (delta=-1.500439ms)
	I0906 12:05:35.586700   12253 fix.go:200] guest clock delta is within tolerance: -1.500439ms
	I0906 12:05:35.586703   12253 start.go:83] releasing machines lock for "ha-343000-m02", held for 14.890212868s
	I0906 12:05:35.586719   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.586869   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:35.609959   12253 out.go:177] * Found network options:
	I0906 12:05:35.631361   12253 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:05:35.652026   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.652053   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652675   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652820   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652904   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:35.652927   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:05:35.652986   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.653055   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:05:35.653068   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.653078   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653249   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653283   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653371   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653405   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653519   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653550   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.653617   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:05:35.689663   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:35.689725   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:35.741169   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:35.741183   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.741249   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:35.756280   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:35.765285   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:35.774250   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:35.774298   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:35.783141   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.792103   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:35.800998   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.809931   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:35.818930   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:35.828100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:35.837011   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:35.846071   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:35.854051   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:35.862225   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:35.953449   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:35.973036   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.973102   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:35.989701   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.002119   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:36.020969   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.032323   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.043370   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:36.064919   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.076134   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:36.091185   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:36.094041   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:36.101975   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:36.115524   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:36.210477   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:36.307446   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:36.307474   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:36.321506   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:36.425142   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:38.743512   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31834803s)
	I0906 12:05:38.743573   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:38.754689   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:38.767595   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:38.778550   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:38.871803   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:38.967444   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.077912   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:39.091499   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:39.102647   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.199868   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:39.269396   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:39.269473   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:39.274126   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:39.274176   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:39.279526   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:39.307628   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:39.307702   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.324272   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.363496   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:39.384323   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:05:39.405031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:39.405472   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:39.410152   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:39.420507   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:05:39.420684   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:39.420907   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.420932   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.430101   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56352
	I0906 12:05:39.430438   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.430796   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.430812   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.431028   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.431139   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:39.431212   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:39.431285   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:39.432244   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:05:39.432496   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.432518   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.441251   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56354
	I0906 12:05:39.441578   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.441903   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.441918   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.442138   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.442248   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:39.442348   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.25
	I0906 12:05:39.442355   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:39.442365   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:39.442516   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:39.442578   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:39.442588   12253 certs.go:256] generating profile certs ...
	I0906 12:05:39.442681   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:39.442772   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.7390dc12
	I0906 12:05:39.442830   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:39.442838   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:39.442859   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:39.442879   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:39.442896   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:39.442915   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:39.442951   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:39.442970   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:39.442987   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:39.443067   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:39.443106   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:39.443114   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:39.443147   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:39.443183   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:39.443212   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:39.443276   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:39.443310   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.443336   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.443355   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.443381   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:39.443473   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:39.443566   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:39.443662   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:39.443742   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:39.474601   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:05:39.477773   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:05:39.486087   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:05:39.489291   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:05:39.497797   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:05:39.500976   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:05:39.508902   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:05:39.512097   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:05:39.522208   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:05:39.529029   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:05:39.538558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:05:39.541788   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:05:39.551255   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:39.571163   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:39.590818   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:39.610099   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:39.629618   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:39.649203   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:39.668940   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:39.688319   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:39.707568   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:39.727593   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:39.746946   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:39.766191   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:05:39.779761   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:05:39.793389   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:05:39.807028   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:05:39.820798   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:05:39.834428   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:05:39.848169   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:05:39.861939   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:39.866268   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:39.875520   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878895   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878936   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.883242   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:39.892394   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:39.901475   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904880   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904919   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.909164   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:39.918366   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:39.927561   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.930968   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.931005   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.935325   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:05:39.944442   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:39.947919   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:39.952225   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:39.956510   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:39.960794   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:39.965188   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:39.969546   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:05:39.973805   12253 kubeadm.go:934] updating node {m02 192.169.0.25 8443 v1.31.0 docker true true} ...
	I0906 12:05:39.973869   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:39.973885   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:39.973920   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:39.987092   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:39.987133   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:39.987182   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:39.995535   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:39.995584   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:05:40.003762   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:05:40.017266   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:40.030719   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:40.044348   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:40.047310   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:40.057546   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.156340   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.171403   12253 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:40.171578   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:40.192574   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:05:40.213457   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.344499   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.359579   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:40.359776   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:05:40.359813   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:05:40.359973   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:05:40.360058   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:40.360063   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:40.360071   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:40.360075   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:47.989850   12253 round_trippers.go:574] Response Status:  in 7629 milliseconds
	I0906 12:05:48.990862   12253 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990895   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:48.990902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:48.990922   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:49.992764   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:49.992860   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:56357->192.169.0.24:8443: read: connection reset by peer
	I0906 12:05:49.992914   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:49.992923   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:49.992931   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:49.992938   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:50.992884   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:50.992985   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:50.992993   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:50.993001   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:50.993007   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:51.994156   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:51.994218   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:51.994272   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:51.994282   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:51.994293   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:51.994300   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:52.994610   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:52.994678   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:52.994684   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:52.994690   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:52.994695   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:53.996452   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:53.996513   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:53.996568   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:53.996577   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:53.996587   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:53.996600   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:54.996281   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:54.996431   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:54.996445   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:54.996456   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:54.996470   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997732   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:55.997791   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:55.997834   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:55.997841   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:55.997848   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997855   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998659   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:56.998737   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:56.998743   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:56.998748   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998753   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:57.998704   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:57.998768   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:57.998824   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:57.998830   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:57.998841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:57.998847   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.234879   12253 round_trippers.go:574] Response Status: 200 OK in 2236 milliseconds
	I0906 12:06:00.235584   12253 node_ready.go:49] node "ha-343000-m02" has status "Ready":"True"
	I0906 12:06:00.235597   12253 node_ready.go:38] duration metric: took 19.875567395s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:06:00.235604   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:00.235643   12253 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:06:00.235653   12253 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:06:00.235696   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:00.235701   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.235707   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.235711   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.262088   12253 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0906 12:06:00.268356   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.268408   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:00.268414   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.268421   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.268427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.271139   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.271625   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.271633   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.271638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.271642   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.273753   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.274136   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.274144   12253 pod_ready.go:82] duration metric: took 5.774893ms for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274150   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274179   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:00.274184   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.274189   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.274192   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.275924   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.276344   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.276351   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.276355   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.276360   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278001   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.278322   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.278329   12253 pod_ready.go:82] duration metric: took 4.174121ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278335   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278363   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:00.278368   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.278373   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278379   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.280145   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.280523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.280530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.280535   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.280540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.282107   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.282477   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.282486   12253 pod_ready.go:82] duration metric: took 4.146745ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282492   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282522   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.282528   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.282534   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.282537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.284223   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.284663   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.284670   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.284676   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.284679   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.286441   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.782726   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.782751   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.782796   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.782807   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.786175   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:00.786692   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.786700   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.786706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.786710   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.788874   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.283655   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.283671   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.283678   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.283683   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.285985   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.286465   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.286473   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.286481   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.286485   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.288565   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.782633   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.782651   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.782659   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.782664   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.785843   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:01.786296   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.786304   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.786309   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.786314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.788345   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.788771   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.788779   12253 pod_ready.go:82] duration metric: took 1.506279407s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788786   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788823   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:01.788828   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.788833   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.788838   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.790798   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:01.791160   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:01.791171   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.791184   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.791187   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.793250   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.793611   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.793620   12253 pod_ready.go:82] duration metric: took 4.828788ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.793631   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.837481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:01.837495   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.837504   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.837509   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.840718   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.037469   12253 request.go:632] Waited for 196.356353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037506   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037512   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.037520   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.037525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.040221   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.040550   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:02.040560   12253 pod_ready.go:82] duration metric: took 246.922589ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.040567   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.237374   12253 request.go:632] Waited for 196.770161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237419   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.237436   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.237442   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.240098   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.437383   12253 request.go:632] Waited for 196.723319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437429   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.437443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.437449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.440277   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.636447   12253 request.go:632] Waited for 94.227022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636509   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636516   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.636524   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.636528   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.640095   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.837639   12253 request.go:632] Waited for 197.104367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837707   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837717   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.837763   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.837788   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.841651   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:03.040768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.040781   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.040789   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.040793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.043403   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:03.236506   12253 request.go:632] Waited for 192.559607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236606   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236618   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.236631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.236637   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.240751   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.540928   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.540954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.540973   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.540980   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.545016   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.637802   12253 request.go:632] Waited for 92.404425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637881   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637890   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.637902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.637910   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.642163   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.041768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.041794   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.041804   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.041813   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.046193   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.047251   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.047260   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.047266   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.047277   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.056137   12253 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0906 12:06:04.056428   12253 pod_ready.go:103] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:04.541406   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.541425   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.541434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.541439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.544224   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:04.544684   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.544691   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.544697   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.544707   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.547090   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.040907   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:05.040922   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.040930   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.040934   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.044733   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.045134   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:05.045143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.045149   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.045152   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.047168   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.047571   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.047581   12253 pod_ready.go:82] duration metric: took 3.007003521s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047587   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047621   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:05.047626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.047631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.047636   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.049432   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:05.236368   12253 request.go:632] Waited for 186.419986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236497   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236514   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.236525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.236532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.239828   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.240204   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.240214   12253 pod_ready.go:82] duration metric: took 192.620801ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.240220   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.435846   12253 request.go:632] Waited for 195.558833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435906   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.435914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.435921   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.438946   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.636650   12253 request.go:632] Waited for 197.107158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636711   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636719   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.636728   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.636733   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.639926   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.640212   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.640221   12253 pod_ready.go:82] duration metric: took 399.995302ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.640232   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.837401   12253 request.go:632] Waited for 197.103806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837486   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.837513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.837523   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.840662   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.035821   12253 request.go:632] Waited for 194.603254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035950   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.035962   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.035968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.039252   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.039561   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.039571   12253 pod_ready.go:82] duration metric: took 399.332528ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.039578   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.236804   12253 request.go:632] Waited for 197.127943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236841   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236849   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.236856   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.236861   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.239571   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.435983   12253 request.go:632] Waited for 195.836904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436083   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436095   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.436107   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.436115   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.440028   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.440297   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.440306   12253 pod_ready.go:82] duration metric: took 400.722778ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.440313   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.635911   12253 request.go:632] Waited for 195.558637ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635989   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635997   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.636005   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.636009   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.638766   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.836563   12253 request.go:632] Waited for 197.42239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836630   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836640   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.836651   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.836656   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.840182   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.840437   12253 pod_ready.go:93] pod "kube-proxy-8hww6" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.840446   12253 pod_ready.go:82] duration metric: took 400.127213ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.840453   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.036000   12253 request.go:632] Waited for 195.50345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036078   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.036093   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.036101   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.039960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.237550   12253 request.go:632] Waited for 197.186932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237618   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237627   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.237638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.237645   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.241824   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:07.242186   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.242196   12253 pod_ready.go:82] duration metric: took 401.736827ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.242202   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.437080   12253 request.go:632] Waited for 194.824311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437120   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437127   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.437134   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.437177   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.439746   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:07.636668   12253 request.go:632] Waited for 196.435868ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636764   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636773   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.636784   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.636790   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.640555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.640971   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.640979   12253 pod_ready.go:82] duration metric: took 398.771488ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.640986   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.837782   12253 request.go:632] Waited for 196.72045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837885   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837895   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.837913   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.841222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.037474   12253 request.go:632] Waited for 195.707367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037543   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037551   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.037559   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.037564   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.041008   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.237863   12253 request.go:632] Waited for 96.589125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238009   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238027   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.238039   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.238064   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.241278   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.436102   12253 request.go:632] Waited for 194.439362ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436137   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.436151   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.436183   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.439043   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:08.642356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.642376   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.642388   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.642397   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.645933   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.837859   12253 request.go:632] Waited for 191.363155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837895   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837900   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.837911   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.841081   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.141167   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.141182   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.141191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.141195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.144158   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.235895   12253 request.go:632] Waited for 91.258445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235957   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.235972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.235977   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.239065   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.641494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.641508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.641517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.641521   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.644350   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.644757   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.644765   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.644771   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.644774   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.647091   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.647426   12253 pod_ready.go:103] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:10.141899   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:10.141923   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.141934   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.141941   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.145540   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.145973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.145981   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.145987   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.145989   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.148176   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.148538   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.148547   12253 pod_ready.go:82] duration metric: took 2.507551998s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.148554   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.235772   12253 request.go:632] Waited for 87.183047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235805   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235811   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.235831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.235849   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.238046   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.437551   12253 request.go:632] Waited for 199.151796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437619   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.437643   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.437648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.440639   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.440964   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.440974   12253 pod_ready.go:82] duration metric: took 292.414078ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.440981   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.636354   12253 request.go:632] Waited for 195.279783ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636426   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636437   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.636450   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.636456   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.641024   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:10.836907   12253 request.go:632] Waited for 195.513588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.836991   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.837001   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.837012   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.837020   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.840787   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.841194   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.841203   12253 pod_ready.go:82] duration metric: took 400.216153ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.841209   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.036390   12253 request.go:632] Waited for 195.137597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036488   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.036510   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.036517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.040104   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:11.236464   12253 request.go:632] Waited for 195.741522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.236507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.236513   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.244008   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:11.244389   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:11.244399   12253 pod_ready.go:82] duration metric: took 403.184015ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.244409   12253 pod_ready.go:39] duration metric: took 11.008775818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:11.244428   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:11.244490   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:11.260044   12253 api_server.go:72] duration metric: took 31.088552933s to wait for apiserver process to appear ...
	I0906 12:06:11.260057   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:11.260076   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:11.268665   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:11.268720   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:11.268725   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.268730   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.268734   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.269258   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:11.269330   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:11.269341   12253 api_server.go:131] duration metric: took 9.279203ms to wait for apiserver health ...
	I0906 12:06:11.269351   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:11.436974   12253 request.go:632] Waited for 167.586901ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437022   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437029   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.437043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.437047   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.441302   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:11.447157   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:11.447183   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447192   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447198   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.447201   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.447204   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.447208   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447211   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.447214   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.447218   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447223   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.447228   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.447232   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.447237   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.447241   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.447244   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.447247   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.447253   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.447258   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.447264   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.447268   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.447270   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.447273   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.447276   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.447294   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.447303   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.447308   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.447313   12253 system_pods.go:74] duration metric: took 177.956833ms to wait for pod list to return data ...
	I0906 12:06:11.447319   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:11.637581   12253 request.go:632] Waited for 190.208152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637651   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637657   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.637664   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.637668   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.650462   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:11.650666   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:11.650678   12253 default_sa.go:55] duration metric: took 203.353142ms for default service account to be created ...
	I0906 12:06:11.650687   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:11.837096   12253 request.go:632] Waited for 186.371823ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837128   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837134   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.837139   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.837143   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.866992   12253 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0906 12:06:11.873145   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:11.873167   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873175   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873181   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.873185   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.873188   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.873195   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873199   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.873202   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.873206   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873211   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.873215   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.873219   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.873223   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.873227   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.873231   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.873233   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.873236   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.873240   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.873244   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.873247   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.873252   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.873256   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.873259   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.873262   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.873265   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.873268   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.873274   12253 system_pods.go:126] duration metric: took 222.581886ms to wait for k8s-apps to be running ...
	I0906 12:06:11.873283   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:11.873340   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:11.886025   12253 system_svc.go:56] duration metric: took 12.733456ms WaitForService to wait for kubelet
	I0906 12:06:11.886050   12253 kubeadm.go:582] duration metric: took 31.714560483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:11.886086   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:12.036232   12253 request.go:632] Waited for 150.073414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036268   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036273   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:12.036286   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:12.036290   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:12.048789   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:12.049838   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049855   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049868   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049873   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049876   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049881   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049884   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049893   12253 node_conditions.go:105] duration metric: took 163.797553ms to run NodePressure ...
	I0906 12:06:12.049902   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:12.049922   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:12.087274   12253 out.go:201] 
	I0906 12:06:12.123635   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:12.123705   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.161370   12253 out.go:177] * Starting "ha-343000-m03" control-plane node in "ha-343000" cluster
	I0906 12:06:12.219408   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:12.219442   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:12.219591   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:12.219605   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:12.219694   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.220349   12253 start.go:360] acquireMachinesLock for ha-343000-m03: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:12.220455   12253 start.go:364] duration metric: took 68.753µs to acquireMachinesLock for "ha-343000-m03"
	I0906 12:06:12.220476   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:12.220482   12253 fix.go:54] fixHost starting: m03
	I0906 12:06:12.220813   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:12.220843   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:12.230327   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56369
	I0906 12:06:12.230794   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:12.231264   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:12.231284   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:12.231543   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:12.231691   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.231816   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:06:12.231923   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.232050   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:06:12.233006   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.233040   12253 fix.go:112] recreateIfNeeded on ha-343000-m03: state=Stopped err=<nil>
	I0906 12:06:12.233052   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	W0906 12:06:12.233162   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:12.271360   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m03" ...
	I0906 12:06:12.312281   12253 main.go:141] libmachine: (ha-343000-m03) Calling .Start
	I0906 12:06:12.312472   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.312588   12253 main.go:141] libmachine: (ha-343000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid
	I0906 12:06:12.314085   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.314111   12253 main.go:141] libmachine: (ha-343000-m03) DBG | pid 10460 is in state "Stopped"
	I0906 12:06:12.314145   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid...
	I0906 12:06:12.314314   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Using UUID 5abf6194-a669-4f35-b6fc-c88bfc629e81
	I0906 12:06:12.392247   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Generated MAC 3e:84:3d:bc:9c:31
	I0906 12:06:12.392279   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:12.392453   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392498   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392570   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5abf6194-a669-4f35-b6fc-c88bfc629e81", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:12.392621   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5abf6194-a669-4f35-b6fc-c88bfc629e81 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:12.392631   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:12.394468   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Pid is 12285
	I0906 12:06:12.395082   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Attempt 0
	I0906 12:06:12.395129   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.395296   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 12285
	I0906 12:06:12.398168   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Searching for 3e:84:3d:bc:9c:31 in /var/db/dhcpd_leases ...
	I0906 12:06:12.398286   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:12.398303   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:12.398316   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:12.398325   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:12.398339   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:06:12.398359   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found match: 3e:84:3d:bc:9c:31
	I0906 12:06:12.398382   12253 main.go:141] libmachine: (ha-343000-m03) DBG | IP: 192.169.0.26
	I0906 12:06:12.398414   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetConfigRaw
	I0906 12:06:12.399172   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:12.399462   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.400029   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:12.400042   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.400184   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:12.400344   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:12.400464   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400591   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400728   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:12.400904   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:12.401165   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:12.401176   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:12.404210   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:12.438119   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:12.439198   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.439227   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.439241   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.439256   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.845267   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:12.845282   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:12.960204   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.960224   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.960244   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.960258   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.961041   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:12.961054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:06:18.729819   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:06:18.729887   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:06:18.729898   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:06:18.753054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:06:23.465534   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:06:23.465548   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465717   12253 buildroot.go:166] provisioning hostname "ha-343000-m03"
	I0906 12:06:23.465726   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465818   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.465902   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.465981   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466055   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466146   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.466265   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.466412   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.466421   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m03 && echo "ha-343000-m03" | sudo tee /etc/hostname
	I0906 12:06:23.536843   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m03
	
	I0906 12:06:23.536860   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.536985   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.537079   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537171   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537236   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.537354   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.537507   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.537525   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:06:23.606665   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:06:23.606681   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:06:23.606695   12253 buildroot.go:174] setting up certificates
	I0906 12:06:23.606700   12253 provision.go:84] configureAuth start
	I0906 12:06:23.606707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.606846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:23.606946   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.607022   12253 provision.go:143] copyHostCerts
	I0906 12:06:23.607051   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607104   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:06:23.607112   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607235   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:06:23.607441   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607476   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:06:23.607482   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607552   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:06:23.607719   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607747   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:06:23.607752   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607836   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:06:23.607981   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m03 san=[127.0.0.1 192.169.0.26 ha-343000-m03 localhost minikube]
	I0906 12:06:23.699873   12253 provision.go:177] copyRemoteCerts
	I0906 12:06:23.699921   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:06:23.699935   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.700077   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.700175   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.700270   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.700376   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:23.737703   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:06:23.737771   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:06:23.757756   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:06:23.757827   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:06:23.777598   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:06:23.777673   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:06:23.797805   12253 provision.go:87] duration metric: took 191.09552ms to configureAuth
	I0906 12:06:23.797818   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:06:23.797988   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:23.798002   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:23.798134   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.798231   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.798314   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798400   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798488   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.798597   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.798724   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.798732   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:06:23.860492   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:06:23.860504   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:06:23.860586   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:06:23.860599   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.860730   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.860807   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.860907   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.861010   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.861140   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.861285   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.861332   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:06:23.935021   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:06:23.935039   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.935186   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.935286   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935371   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935478   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.935609   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.935750   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.935762   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:06:25.580352   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:06:25.580366   12253 machine.go:96] duration metric: took 13.180301802s to provisionDockerMachine
	I0906 12:06:25.580373   12253 start.go:293] postStartSetup for "ha-343000-m03" (driver="hyperkit")
	I0906 12:06:25.580380   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:06:25.580394   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.580572   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:06:25.580585   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.580672   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.580761   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.580846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.580931   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.621691   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:06:25.626059   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:06:25.626069   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:06:25.626156   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:06:25.626292   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:06:25.626299   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:06:25.626479   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:06:25.640080   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:25.666256   12253 start.go:296] duration metric: took 85.87411ms for postStartSetup
	I0906 12:06:25.666279   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.666455   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:06:25.666469   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.666570   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.666655   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.666734   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.666815   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.704275   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:06:25.704337   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:06:25.737458   12253 fix.go:56] duration metric: took 13.516946704s for fixHost
	I0906 12:06:25.737482   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.737626   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.737732   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737832   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737920   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.738049   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:25.738192   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:25.738199   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:06:25.803149   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649585.904544960
	
	I0906 12:06:25.803162   12253 fix.go:216] guest clock: 1725649585.904544960
	I0906 12:06:25.803168   12253 fix.go:229] Guest: 2024-09-06 12:06:25.90454496 -0700 PDT Remote: 2024-09-06 12:06:25.737472 -0700 PDT m=+83.951104505 (delta=167.07296ms)
	I0906 12:06:25.803178   12253 fix.go:200] guest clock delta is within tolerance: 167.07296ms
	I0906 12:06:25.803182   12253 start.go:83] releasing machines lock for "ha-343000-m03", held for 13.582690615s
	I0906 12:06:25.803198   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.803329   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:25.825405   12253 out.go:177] * Found network options:
	I0906 12:06:25.846508   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25
	W0906 12:06:25.867569   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.867608   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.867639   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868819   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:06:25.868894   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	W0906 12:06:25.868907   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.868930   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.869032   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:06:25.869046   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.869089   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869194   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869217   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869337   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869358   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869516   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.869640   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	W0906 12:06:25.904804   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:06:25.904860   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:06:25.953607   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:06:25.953623   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:25.953707   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:25.969069   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:06:25.977320   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:06:25.985732   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:06:25.985790   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:06:25.994169   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.002564   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:06:26.011076   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.019409   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:06:26.027829   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:06:26.036100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:06:26.044789   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:06:26.053382   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:06:26.060878   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:06:26.068234   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.161656   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:06:26.180419   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:26.180540   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:06:26.197783   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.208495   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:06:26.223788   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.234758   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.245879   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:06:26.268201   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.279748   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:26.298675   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:06:26.301728   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:06:26.309959   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:06:26.323781   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:06:26.418935   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:06:26.520404   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:06:26.520429   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:06:26.534785   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.635772   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:06:28.931869   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296074778s)
	I0906 12:06:28.931929   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:06:28.943824   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:06:28.959441   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:28.970674   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:06:29.066042   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:06:29.168956   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.286202   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:06:29.299988   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:29.311495   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.429259   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:06:29.496621   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:06:29.496705   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:06:29.502320   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:06:29.502374   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:06:29.505587   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:06:29.534004   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:06:29.534083   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.551834   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.590600   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:06:29.632268   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:06:29.653333   12253 out.go:177]   - env NO_PROXY=192.169.0.24,192.169.0.25
	I0906 12:06:29.674153   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:29.674373   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:06:29.677525   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:29.687202   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:06:29.687389   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:29.687610   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.687639   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.696472   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56391
	I0906 12:06:29.696894   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.697234   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.697246   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.697502   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.697641   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:06:29.697736   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:29.697809   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:06:29.698794   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:06:29.699046   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.699070   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.707791   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56393
	I0906 12:06:29.708136   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.708457   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.708468   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.708696   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.708812   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:06:29.708911   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.26
	I0906 12:06:29.708917   12253 certs.go:194] generating shared ca certs ...
	I0906 12:06:29.708928   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:06:29.709069   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:06:29.709123   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:06:29.709132   12253 certs.go:256] generating profile certs ...
	I0906 12:06:29.709257   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:06:29.709340   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.e464bc73
	I0906 12:06:29.709394   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:06:29.709401   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:06:29.709422   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:06:29.709447   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:06:29.709465   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:06:29.709482   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:06:29.709510   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:06:29.709528   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:06:29.709550   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:06:29.709623   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:06:29.709661   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:06:29.709669   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:06:29.709702   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:06:29.709732   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:06:29.709766   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:06:29.709833   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:29.709868   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:06:29.709889   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:06:29.709908   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:29.709932   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:06:29.710030   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:06:29.710110   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:06:29.710211   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:06:29.710304   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:06:29.742607   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:06:29.746569   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:06:29.754558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:06:29.757841   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:06:29.765881   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:06:29.769140   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:06:29.778234   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:06:29.781483   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:06:29.789701   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:06:29.792877   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:06:29.801155   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:06:29.804562   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:06:29.812907   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:06:29.833527   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:06:29.854042   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:06:29.874274   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:06:29.894675   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:06:29.914759   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:06:29.935020   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:06:29.955774   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:06:29.976174   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:06:29.996348   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:06:30.016705   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:06:30.036752   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:06:30.050816   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:06:30.064469   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:06:30.078121   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:06:30.092155   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:06:30.106189   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:06:30.120313   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:06:30.134091   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:06:30.138549   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:06:30.147484   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151103   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151157   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.155470   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:06:30.164282   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:06:30.173035   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176736   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176783   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.181161   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:06:30.189862   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:06:30.198669   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202224   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202268   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.206651   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:06:30.215322   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:06:30.218903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:06:30.223374   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:06:30.227903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:06:30.232564   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:06:30.237667   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:06:30.242630   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:06:30.247576   12253 kubeadm.go:934] updating node {m03 192.169.0.26 8443 v1.31.0 docker true true} ...
	I0906 12:06:30.247652   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:06:30.247670   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:06:30.247719   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:06:30.261197   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:06:30.261239   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:06:30.261300   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:06:30.269438   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:06:30.269496   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:06:30.277362   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:06:30.291520   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:06:30.305340   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:06:30.319495   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:06:30.322637   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
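The bash one-liner above rewrites `/etc/hosts` idempotently: `grep -v` drops any existing `control-plane.minikube.internal` line, the fresh VIP mapping is appended, and the result is copied back with sudo. The same pattern against a scratch file (the `/tmp/hosts.demo` path and the seed entries are illustrative):

```shell
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Drop the stale entry, then append the fresh mapping. Running this twice
# still leaves exactly one entry, which is what makes the rewrite idempotent.
{ grep -v "$(printf '\t')control-plane.minikube.internal\$" "$HOSTS"; \
  printf '192.169.0.254\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"

grep -c 'control-plane.minikube.internal' "$HOSTS"   # exactly one entry remains
```

The log's version copies via `sudo cp /tmp/h.$$` instead of `mv` only because `/etc/hosts` needs root to write.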
	I0906 12:06:30.332577   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.441240   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.456369   12253 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:06:30.456602   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:30.477910   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:06:30.498557   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.628440   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.645947   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:06:30.646165   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:06:30.646208   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:06:30.646371   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.646412   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:30.646417   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.646423   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.646427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649121   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:30.649426   12253 node_ready.go:49] node "ha-343000-m03" has status "Ready":"True"
	I0906 12:06:30.649435   12253 node_ready.go:38] duration metric: took 3.055625ms for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.649441   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:30.649480   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:30.649485   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.649491   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.655093   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:30.660461   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:30.660533   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:30.660539   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.660545   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.660550   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.664427   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:30.664864   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:30.664872   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.664877   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.664880   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.667569   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.161508   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.161522   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.161528   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.161531   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.164411   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.165052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.165061   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.165070   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.165074   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.167897   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.660843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.660861   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.660868   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.660871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.668224   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:31.668938   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.668954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.668969   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.668987   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.674737   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:32.161451   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.161468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.161496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.161501   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.164555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.165061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.165069   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.165075   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.165078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.167689   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.661269   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.661285   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.661294   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.661316   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.664943   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.665460   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.665469   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.665475   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.665479   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.667934   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.668229   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:33.161930   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.161964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.161971   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.161975   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.165689   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.166478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.166488   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.166497   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.166503   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.169565   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.660809   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.660831   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.660841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.660846   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.664137   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.665061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.665071   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.665078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.665099   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.667811   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.161378   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.161391   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.161398   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.161403   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.165094   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:34.165523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.165531   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.165537   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.165540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.167949   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.661206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.661222   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.661228   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.661230   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.663772   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.664499   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.664507   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.664513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.664517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.666543   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.161667   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.161689   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.161700   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.161705   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.166875   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.167311   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.167319   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.167324   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.167328   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.172902   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.173323   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:35.661973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.661988   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.661994   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.661998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.664583   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.664981   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.664989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.664998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.665001   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.667322   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.161747   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.161785   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.161793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.161796   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.164939   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.165450   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.165459   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.165464   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.165474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.167808   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.661492   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.661508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.661532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.661537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.664941   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.665455   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.665464   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.665471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.665474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.668192   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.161660   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.161678   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.161685   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.161688   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.164012   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.164541   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.164549   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.164555   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.164558   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.166577   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.662457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.662494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.662505   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.662511   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.665311   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.666039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.666048   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.666053   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.666056   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.668294   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.668600   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:38.162628   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.162646   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.162654   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.162659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.165660   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.166284   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.166292   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.166298   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.166301   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.168559   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.662170   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.662185   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.662191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.662195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.664733   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.665194   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.665207   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.665211   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.667563   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.161491   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.161508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.161517   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.161522   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.164370   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.164762   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.164770   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.164776   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.164780   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.166614   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:39.661843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.661860   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.661866   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.661871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.664287   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.664950   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.664958   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.664964   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.664968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.667194   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.160891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.160921   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.160933   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.160955   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.165388   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:40.166039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.166047   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.166052   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.166055   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.168212   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.168635   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:40.661892   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.661907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.661914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.661917   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.664471   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.664962   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.664970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.664975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.664984   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.667379   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.160779   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.160797   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.160824   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.160830   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.163878   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:41.164433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.164441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.164446   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.164451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.166991   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.661124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.661138   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.661145   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.661149   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.663595   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.664206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.664214   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.664220   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.664224   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.666219   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:42.161906   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.161926   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.161937   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.161945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.165222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:42.165752   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.165760   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.165765   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.165769   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.167913   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.661255   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.661274   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.661282   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.661288   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.664242   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.664689   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.664697   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.664703   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.664706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.666742   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.667053   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:43.161512   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.161530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.161565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.161575   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.164590   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:43.165234   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.165242   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.165254   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.165258   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.167961   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.660826   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.660844   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.660873   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.660882   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.663557   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.663959   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.663966   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.663972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.663976   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.665816   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.162103   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.162133   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.162158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.162164   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.165060   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.165598   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.165606   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.165612   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.165615   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.167589   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.662307   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.662328   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.662339   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.662344   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.665063   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.665602   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.665610   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.665615   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.665619   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.667607   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.667948   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:45.161277   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.161307   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.161314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.161317   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.163751   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.164201   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.164209   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.164215   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.164217   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.166274   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.662080   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.662099   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.662106   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.662110   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.664692   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.665145   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.665152   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.665158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.665162   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.667158   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.161983   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.162002   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.162011   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.162016   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.165135   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.165638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.165645   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.165650   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.165654   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.167660   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.660973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.661022   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.661036   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.661046   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.664600   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.665041   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.665051   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.665056   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.665061   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.667006   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:47.161827   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.161883   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.161895   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.161902   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.165549   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:47.166029   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.166037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.166041   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.166045   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.168233   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:47.168577   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:47.661554   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.661603   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.661616   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.661625   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.665796   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:47.666259   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.666266   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.666272   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.666276   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.668466   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.161876   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.161891   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.161898   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.161901   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.164419   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.164835   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.164843   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.164849   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.164853   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.166837   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:48.661562   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.661577   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.661598   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.661603   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.663972   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.664457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.664465   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.664470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.664475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.666445   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:49.161410   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.161430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.161438   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.161443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.164478   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:49.164982   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.164989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.164995   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.164998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.167071   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.660698   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.660724   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.660736   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.660742   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.664916   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:49.665349   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.665357   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.665363   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.665367   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.667392   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.667753   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:50.161030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.161065   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.161073   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.161080   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.163537   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.163963   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.163970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.163975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.163979   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.166093   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.661184   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.661238   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.661263   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.661267   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.663637   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.664117   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.664125   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.664131   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.664134   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.666067   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.161515   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.161550   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.161557   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.161561   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.163979   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.164681   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.164690   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.164694   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.164697   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.166790   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.661266   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.661291   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.661374   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.661387   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.664772   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:51.665195   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.665206   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.665216   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.667400   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.667769   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.667779   12253 pod_ready.go:82] duration metric: took 21.007261829s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667785   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667821   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:51.667826   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.667831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.667836   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.669791   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.670205   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.670213   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.670218   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.670221   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.672346   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.672671   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.672679   12253 pod_ready.go:82] duration metric: took 4.889471ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672685   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672718   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:51.672723   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.672729   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.672737   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.674649   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.675030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.675037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.675043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.675046   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.676915   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.677288   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.677297   12253 pod_ready.go:82] duration metric: took 4.607311ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677303   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677339   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:51.677344   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.677349   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.677352   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.679418   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.679897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:51.679907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.679916   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.679920   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.681919   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.682327   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.682336   12253 pod_ready.go:82] duration metric: took 5.028149ms for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682343   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682376   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:51.682381   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.682386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.682389   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.684781   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.685200   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:51.685207   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.685212   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.685215   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.687181   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.687676   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.687685   12253 pod_ready.go:82] duration metric: took 5.337542ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.687696   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.862280   12253 request.go:632] Waited for 174.544275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862360   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862372   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.862382   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.862386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.865455   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.062085   12253 request.go:632] Waited for 196.080428ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.062136   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.062140   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.064928   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.065322   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.065331   12253 pod_ready.go:82] duration metric: took 377.628905ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.065338   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.261393   12253 request.go:632] Waited for 196.009549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261459   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261471   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.261485   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.261492   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.265336   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.461317   12253 request.go:632] Waited for 195.311084ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461362   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.461370   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.461376   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.464202   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.464645   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.464654   12253 pod_ready.go:82] duration metric: took 399.309786ms for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.464661   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.662233   12253 request.go:632] Waited for 197.535092ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662290   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662297   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.662305   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.662311   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.665143   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.862031   12253 request.go:632] Waited for 196.411368ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862119   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.862140   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.862145   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.866136   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.866533   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.866543   12253 pod_ready.go:82] duration metric: took 401.876526ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.866550   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.061387   12253 request.go:632] Waited for 194.796135ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061453   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061462   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.061470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.061476   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.064293   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:53.261526   12253 request.go:632] Waited for 196.74771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261649   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.261659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.261674   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.265603   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.266028   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.266036   12253 pod_ready.go:82] duration metric: took 399.480241ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.266042   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.461478   12253 request.go:632] Waited for 195.397016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461556   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461564   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.461571   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.461576   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.464932   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.661907   12253 request.go:632] Waited for 196.48537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661965   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661991   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.661998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.662002   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.665079   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.665555   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.665565   12253 pod_ready.go:82] duration metric: took 399.515968ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.665572   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.861347   12253 request.go:632] Waited for 195.73444ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861414   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861426   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.861434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.861439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.864177   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:54.061465   12253 request.go:632] Waited for 196.861398ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061517   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061554   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.061565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.061570   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.064700   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.065020   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.065030   12253 pod_ready.go:82] duration metric: took 399.451485ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.065037   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.263289   12253 request.go:632] Waited for 198.174584ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263384   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263411   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.263436   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.263461   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.266722   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.461554   12253 request.go:632] Waited for 194.387224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461599   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461609   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.461620   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.461627   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.465162   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.465533   12253 pod_ready.go:98] node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465543   12253 pod_ready.go:82] duration metric: took 400.500434ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	E0906 12:06:54.465549   12253 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465555   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.662665   12253 request.go:632] Waited for 197.074891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662731   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662740   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.662749   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.662755   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.665777   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.862800   12253 request.go:632] Waited for 196.680356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862911   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862924   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.862936   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.862945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.866911   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.867361   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.867371   12253 pod_ready.go:82] duration metric: took 401.810264ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.867377   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.062512   12253 request.go:632] Waited for 195.060729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062609   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062629   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.062641   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.062648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.066272   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:55.263362   12253 request.go:632] Waited for 196.717271ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263483   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.263507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.263520   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.268072   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.268453   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.268462   12253 pod_ready.go:82] duration metric: took 401.079128ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.268469   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.462230   12253 request.go:632] Waited for 193.721938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462312   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462320   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.462348   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.462357   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.465173   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:55.662089   12253 request.go:632] Waited for 196.464134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662239   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662255   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.662267   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.662275   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.666427   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.666704   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.666714   12253 pod_ready.go:82] duration metric: took 398.240112ms for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.666721   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.861681   12253 request.go:632] Waited for 194.913797ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861767   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861778   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.861790   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.861799   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.865874   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:56.063343   12253 request.go:632] Waited for 197.091674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063491   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.063501   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.063508   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.067298   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.067689   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.067699   12253 pod_ready.go:82] duration metric: took 400.971333ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.067706   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.261328   12253 request.go:632] Waited for 193.578385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261416   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261431   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.261443   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.261451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.264964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.461367   12253 request.go:632] Waited for 196.051039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.461449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.461454   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.464367   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:56.464786   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.464799   12253 pod_ready.go:82] duration metric: took 397.083037ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.464806   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.662171   12253 request.go:632] Waited for 197.309952ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662326   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662340   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.662352   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.662363   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.665960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.862106   12253 request.go:632] Waited for 195.559257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862214   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862225   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.862236   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.862243   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.866072   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.866312   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.866321   12253 pod_ready.go:82] duration metric: took 401.509457ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.866329   12253 pod_ready.go:39] duration metric: took 26.216828833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:56.866341   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:56.866386   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:56.878910   12253 api_server.go:72] duration metric: took 26.422463192s to wait for apiserver process to appear ...
	I0906 12:06:56.878922   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:56.878935   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:56.883745   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:56.883791   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:56.883796   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.883803   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.883808   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.884469   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:56.884556   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:56.884568   12253 api_server.go:131] duration metric: took 5.641059ms to wait for apiserver health ...
	I0906 12:06:56.884573   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:57.061374   12253 request.go:632] Waited for 176.731786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.061480   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.061487   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.066391   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:57.071924   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:57.071938   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.071942   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.071945   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.071948   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.071952   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.071955   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.071958   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.071962   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.071964   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.071967   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.071973   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.071977   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.071979   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.071982   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.071985   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.071988   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.071991   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.071993   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.071996   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.071999   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.072001   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.072004   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.072007   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.072009   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.072012   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.072017   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.072022   12253 system_pods.go:74] duration metric: took 187.444826ms to wait for pod list to return data ...
	I0906 12:06:57.072029   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:57.261398   12253 request.go:632] Waited for 189.325312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261443   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261451   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.261471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.261475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.264018   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:57.264078   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:57.264086   12253 default_sa.go:55] duration metric: took 192.051635ms for default service account to be created ...
	I0906 12:06:57.264103   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:57.461307   12253 request.go:632] Waited for 197.162907ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461342   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461347   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.461367   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.461393   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.466559   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:57.471959   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:57.471969   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.471974   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.471977   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.471981   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.471985   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.471989   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.471992   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.471994   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.471997   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.472000   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.472003   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.472006   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.472009   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.472012   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.472015   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.472017   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.472020   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.472023   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.472026   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.472029   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.472031   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.472034   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.472037   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.472040   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.472043   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.472047   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.472052   12253 system_pods.go:126] duration metric: took 207.94336ms to wait for k8s-apps to be running ...
	I0906 12:06:57.472059   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:57.472107   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:57.483773   12253 system_svc.go:56] duration metric: took 11.709185ms WaitForService to wait for kubelet
	I0906 12:06:57.483792   12253 kubeadm.go:582] duration metric: took 27.027343725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:57.483805   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:57.662348   12253 request.go:632] Waited for 178.494779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662425   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.662448   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.662457   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.665964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:57.666853   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666864   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666872   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666875   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666879   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666882   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666885   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666892   12253 node_conditions.go:105] duration metric: took 183.082589ms to run NodePressure ...
	I0906 12:06:57.666899   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:57.666913   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:57.689595   12253 out.go:201] 
	I0906 12:06:57.710968   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:57.711085   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.733311   12253 out.go:177] * Starting "ha-343000-m04" worker node in "ha-343000" cluster
	I0906 12:06:57.776497   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:57.776531   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:57.776758   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:57.776776   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:57.776887   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.777953   12253 start.go:360] acquireMachinesLock for ha-343000-m04: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:57.778066   12253 start.go:364] duration metric: took 90.409µs to acquireMachinesLock for "ha-343000-m04"
	I0906 12:06:57.778091   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:57.778100   12253 fix.go:54] fixHost starting: m04
	I0906 12:06:57.778535   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:57.778560   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:57.788011   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56397
	I0906 12:06:57.788364   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:57.788747   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:57.788763   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:57.789004   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:57.789119   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.789216   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:06:57.789290   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.789388   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 12:06:57.790320   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid 10558 missing from process table
	I0906 12:06:57.790346   12253 fix.go:112] recreateIfNeeded on ha-343000-m04: state=Stopped err=<nil>
	I0906 12:06:57.790354   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	W0906 12:06:57.790423   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:57.811236   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m04" ...
	I0906 12:06:57.853317   12253 main.go:141] libmachine: (ha-343000-m04) Calling .Start
	I0906 12:06:57.853695   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.853752   12253 main.go:141] libmachine: (ha-343000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid
	I0906 12:06:57.853833   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Using UUID 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5
	I0906 12:06:57.879995   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Generated MAC 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.880018   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:57.880162   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880191   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880277   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:57.880319   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:57.880330   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:57.881745   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Pid is 12301
	I0906 12:06:57.882213   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Attempt 0
	I0906 12:06:57.882229   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.882285   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 12301
	I0906 12:06:57.884227   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Searching for 6a:d8:ba:fa:e9:e7 in /var/db/dhcpd_leases ...
	I0906 12:06:57.884329   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:57.884344   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:06:57.884361   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:57.884375   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:57.884400   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:57.884406   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetConfigRaw
	I0906 12:06:57.884413   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found match: 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.884464   12253 main.go:141] libmachine: (ha-343000-m04) DBG | IP: 192.169.0.27
	I0906 12:06:57.885084   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:06:57.885308   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.885947   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:57.885958   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.886118   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:06:57.886263   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:06:57.886401   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886518   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886625   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:06:57.886755   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:57.886913   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:06:57.886920   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:57.890225   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:57.898506   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:57.900023   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:57.900046   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:57.900059   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:57.900081   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.292623   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:58.292638   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:58.407402   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:58.407425   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:58.407438   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:58.407462   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.408295   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:58.408305   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:07:04.116677   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:07:04.116760   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:07:04.116771   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:07:04.140349   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:07:32.960229   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:07:32.960245   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960393   12253 buildroot.go:166] provisioning hostname "ha-343000-m04"
	I0906 12:07:32.960404   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960498   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:32.960578   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:32.960651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960733   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960822   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:32.960938   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:32.961089   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:32.961097   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m04 && echo "ha-343000-m04" | sudo tee /etc/hostname
	I0906 12:07:33.029657   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m04
	
	I0906 12:07:33.029671   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.029803   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.029895   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.029994   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.030077   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.030212   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.030354   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.030365   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:07:33.094966   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:07:33.094982   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:07:33.094992   12253 buildroot.go:174] setting up certificates
	I0906 12:07:33.094999   12253 provision.go:84] configureAuth start
	I0906 12:07:33.095005   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:33.095148   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:33.095261   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.095345   12253 provision.go:143] copyHostCerts
	I0906 12:07:33.095383   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095445   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:07:33.095451   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095595   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:07:33.095788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095828   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:07:33.095833   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095913   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:07:33.096069   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096123   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:07:33.096133   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096216   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:07:33.096362   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m04 san=[127.0.0.1 192.169.0.27 ha-343000-m04 localhost minikube]
	I0906 12:07:33.148486   12253 provision.go:177] copyRemoteCerts
	I0906 12:07:33.148536   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:07:33.148551   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.148688   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.148785   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.148886   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.148968   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:33.184847   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:07:33.184925   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:07:33.204793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:07:33.204868   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:07:33.225189   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:07:33.225262   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:07:33.245047   12253 provision.go:87] duration metric: took 150.030083ms to configureAuth
	I0906 12:07:33.245064   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:07:33.245233   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:07:33.245264   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:33.245394   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.245474   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.245563   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245656   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245735   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.245857   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.245998   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.246006   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:07:33.305766   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:07:33.305779   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:07:33.305852   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:07:33.305865   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.305998   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.306097   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306198   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306282   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.306410   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.306555   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.306603   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:07:33.377062   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	Environment=NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:07:33.377081   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.377218   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.377309   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377395   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377470   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.377595   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.377731   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.377745   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:07:34.969419   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:07:34.969435   12253 machine.go:96] duration metric: took 37.07976383s to provisionDockerMachine
	I0906 12:07:34.969443   12253 start.go:293] postStartSetup for "ha-343000-m04" (driver="hyperkit")
	I0906 12:07:34.969451   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:07:34.969464   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:34.969653   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:07:34.969667   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:34.969755   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:34.969839   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:34.969938   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:34.970026   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.005883   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:07:35.009124   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:07:35.009135   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:07:35.009234   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:07:35.009411   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:07:35.009418   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:07:35.009642   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:07:35.017147   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:07:35.037468   12253 start.go:296] duration metric: took 68.014068ms for postStartSetup
	I0906 12:07:35.037488   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.037659   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:07:35.037673   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.037762   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.037851   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.037939   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.038032   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.073675   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:07:35.073738   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:07:35.107246   12253 fix.go:56] duration metric: took 37.325422655s for fixHost
	I0906 12:07:35.107273   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.107423   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.107527   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107605   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107700   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.107824   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:35.107967   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:35.107979   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:07:35.169429   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649655.267789382
	
	I0906 12:07:35.169443   12253 fix.go:216] guest clock: 1725649655.267789382
	I0906 12:07:35.169449   12253 fix.go:229] Guest: 2024-09-06 12:07:35.267789382 -0700 PDT Remote: 2024-09-06 12:07:35.107262 -0700 PDT m=+153.317111189 (delta=160.527382ms)
	I0906 12:07:35.169466   12253 fix.go:200] guest clock delta is within tolerance: 160.527382ms
	I0906 12:07:35.169472   12253 start.go:83] releasing machines lock for "ha-343000-m04", held for 37.387671405s
	I0906 12:07:35.169494   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.169634   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:35.192021   12253 out.go:177] * Found network options:
	I0906 12:07:35.212912   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	W0906 12:07:35.233597   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233618   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233628   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.233643   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234159   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234366   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234455   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:07:35.234491   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	W0906 12:07:35.234542   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234565   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234576   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.234648   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:07:35.234651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234665   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.234826   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234871   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235007   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235056   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235182   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235206   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.235315   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	W0906 12:07:35.268496   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:07:35.268557   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:07:35.318514   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:07:35.318528   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.318592   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.333874   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:07:35.343295   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:07:35.352492   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.352552   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:07:35.361630   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.370668   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:07:35.379741   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.389143   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:07:35.398542   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:07:35.407763   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:07:35.416819   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:07:35.426383   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:07:35.434689   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:07:35.442821   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:35.546285   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:07:35.565383   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.565458   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:07:35.587708   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.599182   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:07:35.618394   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.629619   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.640716   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:07:35.663169   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.673665   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.688883   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:07:35.691747   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:07:35.698972   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:07:35.712809   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:07:35.816741   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:07:35.926943   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.926972   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:07:35.942083   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:36.036699   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:08:37.056745   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01976389s)
	I0906 12:08:37.056810   12253 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:08:37.092348   12253 out.go:201] 
	W0906 12:08:37.113034   12253 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:07:33 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388087675Z" level=info msg="Starting up"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388874857Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.389448447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.406541023Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421511237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421602459Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421668995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421705837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421880023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421931200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422075608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422118185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422150327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422179563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422320644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422541368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424094220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424143575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424295349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424338381Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424460558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424511586Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425636722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425688205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425727379Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425760048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425791193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425860087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426020444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426094135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426129732Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426167338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426204356Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426237806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426268346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426298666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426328562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426358230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426389211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426418321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426456445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426487889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426516746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426546507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426578999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426618589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426715802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426750125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426780114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426818663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426851076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426879866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426909029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426949139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426988055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427021053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427049769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427133633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427177682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427207151Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427236043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427298115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427372740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427431600Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427611432Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427700568Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427760941Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427803687Z" level=info msg="containerd successfully booted in 0.022207s"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.407865115Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.420336385Z" level=info msg="Loading containers: start."
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.515687290Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.987987334Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.032534306Z" level=info msg="Loading containers: done."
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.046984897Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.047174717Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066396312Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066609197Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:07:35 ha-343000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.147371084Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:07:36 ha-343000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.149138373Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.151983630Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152081675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152156440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:37 ha-343000-m04 dockerd[1111]: time="2024-09-06T19:07:37.182746438Z" level=info msg="Starting up"
	Sep 06 19:08:37 ha-343000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:08:37.113090   12253 out.go:270] * 
	W0906 12:08:37.114019   12253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:08:37.156019   12253 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203311461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203639509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b7ad89fb08b292cfac509e0c383de126da238700a4e5bad8ad55590054381dba/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e01343203b7a509a71640de600f467038bad7b3d1d628993d32a37ee491ef5d1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f2f69bda625f237b44e2bc9af0e9cfd8b05e944b06149fba0d64a3e513338ba1/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607046115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607111680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607122664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607194485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.645965722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646293720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646498986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.648910956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664089064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664361369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664585443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664903965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:42 ha-343000 dockerd[1148]: time="2024-09-06T19:06:42.976990703Z" level=info msg="ignoring event" container=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977534371Z" level=info msg="shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977730802Z" level=warning msg="cleaning up after shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977773534Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339610101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339689283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339702665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.340050558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c1a60be55b6a1       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   f2f69bda625f2       storage-provisioner
	0e02b4bf2dbaa       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   e01343203b7a5       busybox-7dff88458-x6w7h
	22c131171f901       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   f2f69bda625f2       storage-provisioner
	803c4f073a4fa       ad83b2ca7b09e                                                                                         2 minutes ago        Running             kube-proxy                1                   b7ad89fb08b29       kube-proxy-x6pfk
	554acd0f20e32       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   a2638e4522073       coredns-6f6b679f8f-q4rhs
	c86abdd0a1a3a       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   b2c6d9f178680       kindnet-tj4jx
	d15c1bf38706e       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   9e798ad091c8d       coredns-6f6b679f8f-99jtt
	890baa8f92fc8       045733566833c                                                                                         2 minutes ago        Running             kube-controller-manager   6                   26308c7f15e49       kube-controller-manager-ha-343000
	9ca63a507d338       604f5db92eaa8                                                                                         2 minutes ago        Running             kube-apiserver            6                   70de0991ef26f       kube-apiserver-ha-343000
	5f2ecf46dbad7       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   1804cca78c5d0       kube-vip-ha-343000
	4d2f47c39f165       1766f54c897f0                                                                                         3 minutes ago        Running             kube-scheduler            2                   df0b4d2f0d771       kube-scheduler-ha-343000
	592c214e97d5c       604f5db92eaa8                                                                                         3 minutes ago        Exited              kube-apiserver            5                   70de0991ef26f       kube-apiserver-ha-343000
	8bdc400b3db6d       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      2                   83808e05f091c       etcd-ha-343000
	5cc4eed8c219e       045733566833c                                                                                         3 minutes ago        Exited              kube-controller-manager   5                   26308c7f15e49       kube-controller-manager-ha-343000
	4066393d7e7ae       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   6a05e2d25f30e       kube-vip-ha-343000
	9b99b2f8d6eda       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            1                   920b387c38cf9       kube-scheduler-ha-343000
	11af4dafae646       2e96e5913fc06                                                                                         7 minutes ago        Exited              etcd                      1                   c94f15fec6f2c       etcd-ha-343000
	126eb18521cb6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago       Exited              busybox                   0                   2dc504f501783       busybox-7dff88458-x6w7h
	34d5a9fcc1387       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   80fa6178f69f4       coredns-6f6b679f8f-99jtt
	931a9cafdfafa       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   7b9ebf456874a       coredns-6f6b679f8f-q4rhs
	9e6763d81a899       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   c552ca6da226c       kindnet-tj4jx
	9ab0b6ac90ac6       ad83b2ca7b09e                                                                                         13 minutes ago       Exited              kube-proxy                0                   3b385975c32bf       kube-proxy-x6pfk
	
	
	==> coredns [34d5a9fcc138] <==
	[INFO] 10.244.2.2:58789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120754s
	[INFO] 10.244.2.2:43811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080086s
	[INFO] 10.244.1.2:37705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094111s
	[INFO] 10.244.1.2:51020 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101921s
	[INFO] 10.244.1.2:35595 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128009s
	[INFO] 10.244.1.2:37466 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081653s
	[INFO] 10.244.1.2:44316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092754s
	[INFO] 10.244.0.4:46178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007817s
	[INFO] 10.244.0.4:45010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093888s
	[INFO] 10.244.0.4:53754 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054541s
	[INFO] 10.244.0.4:50908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074295s
	[INFO] 10.244.0.4:40350 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117915s
	[INFO] 10.244.2.2:46721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198726s
	[INFO] 10.244.2.2:49403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105805s
	[INFO] 10.244.2.2:38196 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015881s
	[INFO] 10.244.1.2:40271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009061s
	[INFO] 10.244.1.2:58192 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123353s
	[INFO] 10.244.1.2:58287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102796s
	[INFO] 10.244.2.2:60545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120865s
	[INFO] 10.244.1.2:58192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108489s
	[INFO] 10.244.0.4:46772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135939s
	[INFO] 10.244.0.4:57982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000032936s
	[INFO] 10.244.0.4:40948 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [554acd0f20e3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37373 - 8840 "HINFO IN 6495643642992279060.3361092094518909540. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011184519s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[237904971]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.794) (total time: 30004ms):
	Trace[237904971]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:42.797)
	Trace[237904971]: [30.004464183s] [30.004464183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[660143257]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.798) (total time: 30000ms):
	Trace[660143257]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:42.799)
	Trace[660143257]: [30.000893558s] [30.000893558s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[380072670]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30007ms):
	Trace[380072670]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.797)
	Trace[380072670]: [30.007427279s] [30.007427279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [931a9cafdfaf] <==
	[INFO] 10.244.2.2:47871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092349s
	[INFO] 10.244.2.2:36751 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154655s
	[INFO] 10.244.2.2:35765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113227s
	[INFO] 10.244.2.2:34953 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189846s
	[INFO] 10.244.1.2:37377 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000779385s
	[INFO] 10.244.1.2:36374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000523293s
	[INFO] 10.244.1.2:47415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043613s
	[INFO] 10.244.0.4:56645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006213s
	[INFO] 10.244.0.4:51009 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096214s
	[INFO] 10.244.0.4:41355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183012s
	[INFO] 10.244.2.2:50655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138209s
	[INFO] 10.244.1.2:38832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167262s
	[INFO] 10.244.0.4:46148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117645s
	[INFO] 10.244.0.4:43019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107376s
	[INFO] 10.244.0.4:57161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028792s
	[INFO] 10.244.0.4:42860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034502s
	[INFO] 10.244.2.2:36830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089883s
	[INFO] 10.244.2.2:47924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141909s
	[INFO] 10.244.2.2:47506 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000097095s
	[INFO] 10.244.1.2:49209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011143s
	[INFO] 10.244.1.2:36137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100085s
	[INFO] 10.244.1.2:47199 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000096821s
	[INFO] 10.244.0.4:43720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040385s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d15c1bf38706] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54176 - 21158 "HINFO IN 3457232632200313932.3905864345721771129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010437248s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1587501409]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.793) (total time: 30005ms):
	Trace[1587501409]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.798)
	Trace[1587501409]: [30.005577706s] [30.005577706s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[680749614]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30005ms):
	Trace[680749614]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:06:42.798)
	Trace[680749614]: [30.005762488s] [30.005762488s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1474873071]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.799) (total time: 30001ms):
	Trace[1474873071]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:42.800)
	Trace[1474873071]: [30.001544995s] [30.001544995s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-343000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_55_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:55:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:08:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.24
	  Hostname:    ha-343000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6523db55e885482e8ac62c2082b7e4e8
	  System UUID:                36fe47a6-0000-0000-a226-e026237c9096
	  Boot ID:                    a6ec27d4-119e-4645-b472-4cbf4d3b3af4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6w7h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-99jtt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-q4rhs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-343000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-tj4jx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-343000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-343000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-x6pfk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-343000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-343000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 2m26s                  kube-proxy       
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-343000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           9m10s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  Starting                 3m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m19s (x8 over 3m19s)  kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s (x8 over 3m19s)  kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s (x7 over 3m19s)  kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           2m26s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           2m2s                   node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	
	
	Name:               ha-343000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:08:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.25
	  Hostname:    ha-343000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 01c58e04d4304f6f9c11ce89f0bbf71d
	  System UUID:                2c7446f3-0000-0000-9664-55c72aec5dea
	  Boot ID:                    d9c8abd7-e4ec-46d0-892f-bd1bfa22eaef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jk74s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-343000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-5rtpx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-343000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-343000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zjx8z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-343000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-343000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m29s                  kube-proxy       
	  Normal   Starting                 9m13s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   Starting                 9m17s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 9m17s                  kubelet          Node ha-343000-m02 has been rebooted, boot id: 9a70d273-2199-426f-b35f-a9b4075cc0d7
	  Normal   NodeHasSufficientPID     9m17s                  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m17s                  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m17s                  kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m10s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   Starting                 2m59s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m59s (x8 over 2m59s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m59s (x8 over 2m59s)  kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m59s (x7 over 2m59s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m47s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           2m26s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           2m2s                   node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	
	
	Name:               ha-343000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_57_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:57:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:08:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.26
	  Hostname:    ha-343000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da881992752a4b679c6a5b2a9f0cdfbb
	  System UUID:                5abf4f35-0000-0000-b6fc-c88bfc629e81
	  Boot ID:                    1683487f-47c5-465d-9b2b-74dea29e28d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2kj2b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-343000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-ksnvp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-343000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-343000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-r285j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-343000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-343000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m5s               kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           9m10s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           2m47s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           2m26s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   Starting                 2m9s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m9s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m9s               kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s               kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s               kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m9s               kubelet          Node ha-343000-m03 has been rebooted, boot id: 1683487f-47c5-465d-9b2b-74dea29e28d4
	  Normal   RegisteredNode           2m2s               node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	
	
	Name:               ha-343000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_58_13_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:58:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:59:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.27
	  Hostname:    ha-343000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 25099ec69db34e82bcd2f07d22b80010
	  System UUID:                0c454e5f-0000-0000-8b6f-82e9c2aa82c5
	  Boot ID:                    b76c6143-1924-46d7-b754-0208a6d7ff29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rf4h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8hww6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-343000-m04 status is now: NodeHasNoDiskPressure
	  Normal  CIDRAssignmentFailed     10m                cidrAllocator    Node ha-343000-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-343000-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m10s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           2m47s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           2m26s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeNotReady             2m7s               node-controller  Node ha-343000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           2m2s               node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036474] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008025] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.716498] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006721] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.833567] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.343017] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.247177] systemd-fstab-generator[471]: Ignoring "noauto" option for root device
	[  +0.103204] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +1.994098] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +0.255819] systemd-fstab-generator[1114]: Ignoring "noauto" option for root device
	[  +0.098656] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.058515] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.064719] systemd-fstab-generator[1140]: Ignoring "noauto" option for root device
	[  +2.463494] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.126800] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.101663] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.133711] systemd-fstab-generator[1394]: Ignoring "noauto" option for root device
	[  +0.457617] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[  +6.844240] kauditd_printk_skb: 190 callbacks suppressed
	[ +21.300680] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 6 19:06] kauditd_printk_skb: 83 callbacks suppressed
	
	
	==> etcd [11af4dafae64] <==
	{"level":"warn","ts":"2024-09-06T19:04:56.004501Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:04:56.510489Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:04:56.955363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982261Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000937137s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-06T19:04:56.982656Z","caller":"traceutil/trace.go:171","msg":"trace[219101750] range","detail":"{range_begin:; range_end:; }","duration":"7.001140659s","start":"2024-09-06T19:04:49.981500Z","end":"2024-09-06T19:04:56.982641Z","steps":["trace[219101750] 'agreement among raft nodes before linearized reading'  (duration: 7.000934405s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:04:56.982940Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:04:58.256456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839480Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839529Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842271Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"info","ts":"2024-09-06T19:04:59.555087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	
	
	==> etcd [8bdc400b3db6] <==
	{"level":"warn","ts":"2024-09-06T19:05:52.476788Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:05:52.476858Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:05:57.477883Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:05:57.477875Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:06:02.479112Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:02.479232Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:07.479419Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:07.479730Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:12.480370Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:06:12.480493Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:06:17.480683Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:17.480759Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:22.480793Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:22.480993Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:27.481577Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:06:27.481605Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-06T19:06:32.376901Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.376952Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.377170Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.447537Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"6a6e0aa498652645","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-06T19:06:32.447583Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.448798Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"6a6e0aa498652645","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:06:32.448838Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	
	
	==> kernel <==
	 19:08:39 up 3 min,  0 users,  load average: 0.32, 0.25, 0.10
	Linux ha-343000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9e6763d81a89] <==
	I0906 18:59:27.723199       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727295       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:37.727338       1 main.go:299] handling current node
	I0906 18:59:37.727349       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:37.727353       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:37.727428       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:37.727453       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727489       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:37.727513       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:47.728363       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:47.728518       1 main.go:299] handling current node
	I0906 18:59:47.728633       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:47.728739       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:47.728918       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:47.728997       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:47.729121       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:47.729229       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:57.722632       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:57.722671       1 main.go:299] handling current node
	I0906 18:59:57.722682       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:57.722686       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:57.722937       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:57.722967       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:57.723092       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:57.723199       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c86abdd0a1a3] <==
	I0906 19:08:03.507707       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:08:13.503045       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:08:13.503070       1 main.go:299] handling current node
	I0906 19:08:13.503086       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:08:13.503092       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:08:13.503329       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:08:13.503422       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:08:13.503756       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:08:13.503798       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:08:23.506087       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:08:23.506252       1 main.go:299] handling current node
	I0906 19:08:23.506301       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:08:23.506464       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:08:23.506745       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:08:23.506837       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:08:23.506970       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:08:23.507014       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:08:33.504036       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:08:33.504384       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:08:33.504606       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:08:33.504701       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:08:33.504818       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:08:33.504906       1 main.go:299] handling current node
	I0906 19:08:33.504954       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:08:33.505036       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [592c214e97d5] <==
	I0906 19:05:27.461896       1 options.go:228] external host was not specified, using 192.169.0.24
	I0906 19:05:27.465176       1 server.go:142] Version: v1.31.0
	I0906 19:05:27.465213       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.107777       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:05:28.107810       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:05:28.107883       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:05:28.108002       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:05:28.108375       1 instance.go:232] Using reconciler: lease
	W0906 19:05:48.100071       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:05:48.101622       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0906 19:05:48.109302       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9ca63a507d33] <==
	I0906 19:06:00.319954       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0906 19:06:00.329227       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0906 19:06:00.389615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:06:00.399153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:06:00.399318       1 policy_source.go:224] refreshing policies
	I0906 19:06:00.418950       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:06:00.418975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:06:00.419196       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:06:00.421841       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:06:00.423174       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:06:00.423547       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:06:00.423580       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:06:00.423586       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:06:00.423589       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:06:00.423592       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:06:00.424202       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:06:00.424372       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:06:00.429383       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0906 19:06:00.444807       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.25]
	I0906 19:06:00.446706       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:06:00.460452       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0906 19:06:00.463465       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0906 19:06:00.488387       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:06:01.327320       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:06:01.574034       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.24 192.169.0.25]
	
	
	==> kube-controller-manager [5cc4eed8c219] <==
	I0906 19:05:28.174269       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:05:28.573887       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:05:28.573928       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.585160       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:05:28.585380       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:05:28.585888       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:05:28.586027       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:05:49.113760       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.24:8443/healthz\": dial tcp 192.169.0.24:8443: connect: connection refused"
	
	
	==> kube-controller-manager [890baa8f92fc] <==
	I0906 19:06:13.983300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.314567ms"
	I0906 19:06:14.017696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.303957ms"
	I0906 19:06:14.018733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.896µs"
	I0906 19:06:14.150501       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:06:14.168151       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:06:14.168284       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 19:06:30.950379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m03"
	I0906 19:06:31.854707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.582603ms"
	I0906 19:06:31.855323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.326µs"
	I0906 19:06:32.910883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:32.936381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:33.628526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:34.203998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.91662ms"
	I0906 19:06:34.204272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.095µs"
	I0906 19:06:37.967819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:37.977034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:38.064780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:51.654459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="49.601603ms"
	E0906 19:06:51.654893       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-6f6b679f8f\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-6f6b679f8f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0906 19:06:51.655193       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-l2ztt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-l2ztt\": the object has been modified; please apply your changes to the latest version and try again"
	I0906 19:06:51.655819       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f3f0b4c9-9efd-41cc-93f8-915e2a024362", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-l2ztt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-l2ztt": the object has been modified; please apply your changes to the latest version and try again
	I0906 19:06:51.657515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="153.079µs"
	I0906 19:06:51.663353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="119.395µs"
	I0906 19:06:51.700669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="26.423367ms"
	I0906 19:06:51.700851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.547µs"
	
	
	==> kube-proxy [803c4f073a4f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:06:13.148913       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:06:13.172780       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 19:06:13.173030       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:06:13.214090       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:06:13.214133       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:06:13.214154       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:06:13.217530       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:06:13.218331       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:06:13.218361       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:06:13.222797       1 config.go:197] "Starting service config controller"
	I0906 19:06:13.222930       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:06:13.223035       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:06:13.223104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:06:13.225748       1 config.go:326] "Starting node config controller"
	I0906 19:06:13.225874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:06:13.323124       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:06:13.324280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:06:13.326187       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9ab0b6ac90ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:55:13.194683       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:55:13.204778       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 18:55:13.204815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:55:13.260675       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:55:13.260697       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:55:13.260715       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:55:13.267079       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:55:13.267303       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:55:13.267312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:55:13.269494       1 config.go:197] "Starting service config controller"
	I0906 18:55:13.269521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:55:13.269531       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:55:13.269534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:55:13.269766       1 config.go:326] "Starting node config controller"
	I0906 18:55:13.269792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:55:13.371232       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:55:13.371252       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:55:13.371258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4d2f47c39f16] <==
	W0906 19:05:56.245160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.245533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:56.734981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.735302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:56.742962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.24:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.743085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.24:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:56.935930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.936032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.301942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.301991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.329279       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.329316       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.449839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.449963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.924069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.924282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.279429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.279584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.391628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.391680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.574460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.574508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.613456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.613730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	I0906 19:06:06.337934       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9b99b2f8d6ed] <==
	W0906 19:04:31.417232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.417325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:31.755428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.755742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:35.986154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:35.986279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.066579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.066654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.563029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.563228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.748870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.749078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:45.521553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:45.521675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:47.041120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:47.041443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:52.540182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:52.540432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:54.069445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:54.069585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	E0906 19:04:59.711524       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0906 19:04:59.712006       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0906 19:04:59.712120       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 19:04:59.712142       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0906 19:04:59.712922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.397388    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e798ad091c8dbac0977ce3e9539e6296e56adde9095535a7a2e9c7ea74d7777"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.403625    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ad89fb08b292cfac509e0c383de126da238700a4e5bad8ad55590054381dba"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.416921    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2638e452207351ccfc0f2f01134ae1987de0b7fc1a7d33f66d0fef46e08a1e1"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.548822    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2c6d9f178680726c8e102ac2fb994d4e293ad44539b880ca82a1019b4cbf99a"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.830725    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e01343203b7a509a71640de600f467038bad7b3d1d628993d32a37ee491ef5d1"
	Sep 06 19:06:20 ha-343000 kubelet[1561]: E0906 19:06:20.331039    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:06:20 ha-343000 kubelet[1561]: I0906 19:06:20.393885    1561 scope.go:117] "RemoveContainer" containerID="b3713b7090d8f8af511e66546413a97f331dea488be8efe378a26980838f7cf4"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211095    1561 scope.go:117] "RemoveContainer" containerID="051e748db656a81282f4811bb15ed42555514a115306dfa611e2c0d2af72e345"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211309    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: E0906 19:06:43.211390    1561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9815f44c-20e3-4243-8eb4-60cd42a850ad)\"" pod="kube-system/storage-provisioner" podUID="9815f44c-20e3-4243-8eb4-60cd42a850ad"
	Sep 06 19:06:57 ha-343000 kubelet[1561]: I0906 19:06:57.289715    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:07:20 ha-343000 kubelet[1561]: E0906 19:07:20.331091    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:08:20 ha-343000 kubelet[1561]: E0906 19:08:20.333049    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-343000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (219.61s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-343000" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-343000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-343000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-343000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.24\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.25\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.26\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.27\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 logs -n 25: (3.304831112s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m04 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp testdata/cp-test.txt                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000 sudo cat                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m03 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-343000 node stop m02 -v=7                                                                                                 | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-343000 node start m02 -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000 -v=7                                                                                                       | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-343000 -v=7                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 12:00 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:00 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	| node    | ha-343000 node delete m03 -v=7                                                                                               | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-343000 stop -v=7                                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT | 06 Sep 24 12:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true                                                                                                     | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:05 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:05:01
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:05:01.821113   12253 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:05:01.821396   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821403   12253 out.go:358] Setting ErrFile to fd 2...
	I0906 12:05:01.821407   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821585   12253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:05:01.822962   12253 out.go:352] Setting JSON to false
	I0906 12:05:01.845482   12253 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11072,"bootTime":1725638429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:05:01.845567   12253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:05:01.867344   12253 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:05:01.909192   12253 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:05:01.909251   12253 notify.go:220] Checking for updates...
	I0906 12:05:01.951681   12253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:01.972896   12253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:05:01.993997   12253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:05:02.014915   12253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:05:02.036376   12253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:05:02.058842   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:02.059362   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.059426   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.069603   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56303
	I0906 12:05:02.069962   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.070394   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.070407   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.070602   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.070721   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.070905   12253 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:05:02.071152   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.071173   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.079785   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56305
	I0906 12:05:02.080100   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.080480   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.080508   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.080753   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.080876   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.109151   12253 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:05:02.151203   12253 start.go:297] selected driver: hyperkit
	I0906 12:05:02.151225   12253 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.151398   12253 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:05:02.151526   12253 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.151681   12253 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:05:02.160708   12253 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:05:02.164397   12253 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.164417   12253 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:05:02.167034   12253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:05:02.167076   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:02.167082   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:02.167157   12253 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.167283   12253 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.209167   12253 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:05:02.230210   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:02.230284   12253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:05:02.230304   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:02.230523   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:02.230539   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:02.230657   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.231246   12253 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:02.231321   12253 start.go:364] duration metric: took 58.855µs to acquireMachinesLock for "ha-343000"
	I0906 12:05:02.231338   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:02.231348   12253 fix.go:54] fixHost starting: 
	I0906 12:05:02.231579   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.231602   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.240199   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56307
	I0906 12:05:02.240538   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.240898   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.240906   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.241115   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.241241   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.241344   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:02.241429   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.241509   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:05:02.242441   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 12107 missing from process table
	I0906 12:05:02.242473   12253 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:05:02.242488   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:05:02.242570   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:02.285299   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:05:02.308252   12253 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:05:02.308536   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.308568   12253 main.go:141] libmachine: (ha-343000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:05:02.308690   12253 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:05:02.418778   12253 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:05:02.418805   12253 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:02.418989   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419036   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419095   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:02.419142   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:02.419160   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:02.420829   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Pid is 12266
	I0906 12:05:02.421178   12253 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:05:02.421194   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.421256   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:02.422249   12253 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:05:02.422316   12253 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:02.422340   12253 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66db525c}
	I0906 12:05:02.422356   12253 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:05:02.422371   12253 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:05:02.422430   12253 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:05:02.423159   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:02.423357   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.423787   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:02.423798   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.423945   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:02.424057   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:02.424240   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424491   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:02.424632   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:02.424882   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:02.424892   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:02.428574   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:02.479264   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:02.479938   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.479953   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.479971   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.479984   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.867700   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:02.867715   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:02.983045   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.983079   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.983090   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.983110   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.983957   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:02.983967   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:08.596032   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:05:08.596072   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:05:08.596081   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:05:08.620302   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:05:13.496727   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:13.496743   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.496887   12253 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:05:13.496898   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.497005   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.497091   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.497190   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497290   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497391   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.497515   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.497658   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.497666   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:05:13.573506   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:05:13.573525   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.573649   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.573744   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573841   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573933   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.574054   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.574199   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.574210   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:13.646449   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:13.646474   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:13.646492   12253 buildroot.go:174] setting up certificates
	I0906 12:05:13.646500   12253 provision.go:84] configureAuth start
	I0906 12:05:13.646506   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.646647   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:13.646742   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.646835   12253 provision.go:143] copyHostCerts
	I0906 12:05:13.646872   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.646964   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:13.646972   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.647092   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:13.647297   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647337   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:13.647342   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647419   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:13.647566   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647604   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:13.647609   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647688   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:13.647833   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:05:13.694032   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:13.694082   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:13.694097   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.694208   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.694294   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.694394   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.694509   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:13.734054   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:13.734119   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:13.754153   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:13.754219   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 12:05:13.773776   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:13.773840   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:05:13.793258   12253 provision.go:87] duration metric: took 146.744964ms to configureAuth
	I0906 12:05:13.793272   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:13.793440   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:13.793455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:13.793596   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.793699   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.793786   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793872   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793955   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.794076   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.794207   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.794215   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:13.860967   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:13.860981   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:13.861068   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:13.861082   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.861205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.861297   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861411   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861521   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.861683   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.861822   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.861868   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:13.937805   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:13.937827   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.937964   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.938080   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938295   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.938419   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.938558   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.938571   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:15.619728   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:15.619742   12253 machine.go:96] duration metric: took 13.195921245s to provisionDockerMachine
	I0906 12:05:15.619754   12253 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:05:15.619762   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:15.619772   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.619950   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:15.619966   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.620058   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.620154   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.620257   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.620337   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.660028   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:15.663309   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:15.663323   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:15.663418   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:15.663631   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:15.663638   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:15.663848   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:15.671393   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:15.691128   12253 start.go:296] duration metric: took 71.364923ms for postStartSetup
	I0906 12:05:15.691156   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.691327   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:15.691341   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.691453   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.691544   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.691628   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.691712   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.732095   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:15.732157   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:15.785220   12253 fix.go:56] duration metric: took 13.553838389s for fixHost
	I0906 12:05:15.785242   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.785373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.785462   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785558   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785650   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.785774   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:15.785926   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:15.785933   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:15.851168   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649515.950195219
	
	I0906 12:05:15.851179   12253 fix.go:216] guest clock: 1725649515.950195219
	I0906 12:05:15.851184   12253 fix.go:229] Guest: 2024-09-06 12:05:15.950195219 -0700 PDT Remote: 2024-09-06 12:05:15.785232 -0700 PDT m=+13.999000936 (delta=164.963219ms)
	I0906 12:05:15.851205   12253 fix.go:200] guest clock delta is within tolerance: 164.963219ms
	I0906 12:05:15.851209   12253 start.go:83] releasing machines lock for "ha-343000", held for 13.619855055s
	I0906 12:05:15.851228   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851359   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:15.851455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851761   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851860   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851943   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:15.851974   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852006   12253 ssh_runner.go:195] Run: cat /version.json
	I0906 12:05:15.852029   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852070   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852126   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852163   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852217   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852273   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852292   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852391   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.852414   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.945582   12253 ssh_runner.go:195] Run: systemctl --version
	I0906 12:05:15.950518   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:05:15.954710   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:15.954750   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:15.972724   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:15.972739   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:15.972842   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:15.997626   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:16.009969   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:16.021002   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.021063   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:16.029939   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.039024   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:16.047772   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.056625   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:16.065543   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:16.074247   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:16.082976   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:16.091738   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:16.099691   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:16.107701   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.207522   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:16.227285   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:16.227363   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:16.242536   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.255682   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:16.272770   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.283410   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.293777   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:16.316221   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.326357   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:16.341265   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:16.344224   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:16.351341   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:16.364686   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:16.462680   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:16.567102   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.567167   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:16.581141   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.682906   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:19.018795   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.33586105s)
	I0906 12:05:19.018863   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:19.029907   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:19.042839   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.053183   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:19.161103   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:19.269627   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.376110   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:19.389292   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.400498   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.508773   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:19.574293   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:19.574369   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:19.578648   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:19.578702   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:19.581725   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:19.611289   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:19.611360   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.628755   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.690349   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:19.690435   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:19.690798   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:19.695532   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:19.705484   12253 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:05:19.705569   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:19.705619   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.718680   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.718691   12253 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:05:19.718764   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.731988   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.732008   12253 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:05:19.732017   12253 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:05:19.732095   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:19.732160   12253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:05:19.769790   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:19.769810   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:19.769820   12253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:05:19.769836   12253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:05:19.769924   12253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:05:19.769938   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:19.769993   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:19.783021   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:19.783091   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:19.783139   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:19.790731   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:19.790780   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:05:19.798087   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:05:19.811294   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:19.826571   12253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:05:19.840214   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:19.853805   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:19.856803   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:19.866597   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.969582   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:19.984116   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:05:19.984128   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:19.984139   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:19.984324   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:19.984402   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:19.984413   12253 certs.go:256] generating profile certs ...
	I0906 12:05:19.984529   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:19.984611   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:05:19.984683   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:19.984690   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:19.984715   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:19.984733   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:19.984750   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:19.984767   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:19.984795   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:19.984823   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:19.984846   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:19.984950   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:19.984995   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:19.985004   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:19.985045   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:19.985074   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:19.985102   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:19.985164   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:19.985201   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:19.985223   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:19.985241   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:19.985738   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:20.016977   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:20.040002   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:20.074896   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:20.096785   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:20.117992   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:20.152101   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:20.181980   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:20.249104   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:20.310747   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:20.334377   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:20.354759   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:05:20.368573   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:20.372727   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:20.381943   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385218   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385254   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.389369   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:20.398370   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:20.407468   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410735   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410769   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.414896   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:20.423953   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:20.432893   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436127   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436161   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.440280   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:05:20.449469   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:20.452834   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:20.457085   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:20.461715   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:20.466070   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:20.470282   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:20.474449   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:05:20.478690   12253 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:20.478796   12253 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:05:20.491888   12253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:05:20.500336   12253 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:05:20.500348   12253 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:05:20.500388   12253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:05:20.508605   12253 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:05:20.508923   12253 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.509004   12253 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:05:20.509222   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.509871   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.510072   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:05:20.510389   12253 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:05:20.510569   12253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:05:20.518433   12253 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:05:20.518445   12253 kubeadm.go:597] duration metric: took 18.093623ms to restartPrimaryControlPlane
	I0906 12:05:20.518450   12253 kubeadm.go:394] duration metric: took 39.76917ms to StartCluster
	I0906 12:05:20.518463   12253 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.518535   12253 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.518965   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.519194   12253 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:20.519207   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:05:20.519217   12253 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:05:20.519329   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.562952   12253 out.go:177] * Enabled addons: 
	I0906 12:05:20.584902   12253 addons.go:510] duration metric: took 65.689522ms for enable addons: enabled=[]
	I0906 12:05:20.584940   12253 start.go:246] waiting for cluster config update ...
	I0906 12:05:20.584973   12253 start.go:255] writing updated cluster config ...
	I0906 12:05:20.608171   12253 out.go:201] 
	I0906 12:05:20.630349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.630488   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.652951   12253 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:05:20.695164   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:20.695203   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:20.695405   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:20.695421   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:20.695517   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.696367   12253 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:20.696454   12253 start.go:364] duration metric: took 67.794µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:05:20.696472   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:20.696479   12253 fix.go:54] fixHost starting: m02
	I0906 12:05:20.696771   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:20.696805   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:20.705845   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56329
	I0906 12:05:20.706183   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:20.706528   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:20.706543   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:20.706761   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:20.706875   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.706980   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:05:20.707064   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.707136   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:05:20.708055   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.708088   12253 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:05:20.708098   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:05:20.708185   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:20.734735   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:05:20.776747   12253 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:05:20.777073   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.777115   12253 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:05:20.778701   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.778717   12253 main.go:141] libmachine: (ha-343000-m02) DBG | pid 12118 is in state "Stopped"
	I0906 12:05:20.778778   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:05:20.779095   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:05:20.806950   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:05:20.806972   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:20.807155   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807233   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807304   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:20.807361   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:20.807374   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:20.808851   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Pid is 12276
	I0906 12:05:20.809435   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:05:20.809451   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.809514   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12276
	I0906 12:05:20.811081   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:05:20.811162   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:20.811181   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:05:20.811209   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca2f2}
	I0906 12:05:20.811220   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:05:20.811238   12253 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:05:20.811245   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:05:20.811904   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:20.812111   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.812569   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:20.812582   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.812711   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:20.812849   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:20.812941   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813131   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:20.813262   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:20.813401   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:20.813411   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:20.817160   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:20.825311   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:20.826263   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:20.826278   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:20.826305   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:20.826316   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.214947   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:21.214961   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:21.329668   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:21.329695   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:21.329711   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:21.329721   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.330549   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:21.330560   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:26.960134   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:05:26.960175   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:05:26.960183   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:05:26.984271   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:05:30.128139   12253 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.25:22: connect: connection refused
	I0906 12:05:33.191918   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:33.191932   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192104   12253 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:05:33.192113   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192203   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.192293   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.192374   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192456   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192573   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.192685   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.192834   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.192848   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:05:33.271080   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:05:33.271107   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.271242   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.271343   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271432   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271517   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.271653   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.271816   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.271828   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:33.340749   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:33.340766   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:33.340776   12253 buildroot.go:174] setting up certificates
	I0906 12:05:33.340781   12253 provision.go:84] configureAuth start
	I0906 12:05:33.340788   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.340917   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:33.341015   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.341102   12253 provision.go:143] copyHostCerts
	I0906 12:05:33.341127   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341183   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:33.341189   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341303   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:33.341481   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341516   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:33.341521   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341626   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:33.341793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341824   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:33.341829   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341902   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:33.342105   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:05:33.430053   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:33.430099   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:33.430112   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.430247   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.430337   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.430424   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.430498   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:33.468786   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:33.468854   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:33.488429   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:33.488502   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:05:33.507788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:33.507853   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:05:33.527149   12253 provision.go:87] duration metric: took 186.359429ms to configureAuth
	I0906 12:05:33.527164   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:33.527349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:33.527363   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:33.527493   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.527581   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.527670   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527752   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527834   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.527941   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.528081   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.528089   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:33.592983   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:33.592995   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:33.593066   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:33.593077   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.593197   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.593303   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593392   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593487   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.593630   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.593775   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.593821   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:33.669226   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:33.669253   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.669404   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.669513   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669628   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669726   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.669876   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.670026   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.670038   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:35.327313   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:35.327328   12253 machine.go:96] duration metric: took 14.51472045s to provisionDockerMachine
	I0906 12:05:35.327335   12253 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:05:35.327345   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:35.327357   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.327550   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:35.327564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.327658   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.327737   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.327824   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.327895   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.374953   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:35.380104   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:35.380118   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:35.380209   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:35.380346   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:35.380353   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:35.380535   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:35.392904   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:35.425316   12253 start.go:296] duration metric: took 97.970334ms for postStartSetup
	I0906 12:05:35.425336   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.425510   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:35.425521   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.425611   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.425700   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.425784   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.425866   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.465210   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:35.465270   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:35.519276   12253 fix.go:56] duration metric: took 14.822763667s for fixHost
	I0906 12:05:35.519322   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.519466   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.519564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519682   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519766   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.519897   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:35.520049   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:35.520058   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:35.586671   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649535.517793561
	
	I0906 12:05:35.586682   12253 fix.go:216] guest clock: 1725649535.517793561
	I0906 12:05:35.586690   12253 fix.go:229] Guest: 2024-09-06 12:05:35.517793561 -0700 PDT Remote: 2024-09-06 12:05:35.519294 -0700 PDT m=+33.733024449 (delta=-1.500439ms)
	I0906 12:05:35.586700   12253 fix.go:200] guest clock delta is within tolerance: -1.500439ms
	I0906 12:05:35.586703   12253 start.go:83] releasing machines lock for "ha-343000-m02", held for 14.890212868s
	I0906 12:05:35.586719   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.586869   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:35.609959   12253 out.go:177] * Found network options:
	I0906 12:05:35.631361   12253 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:05:35.652026   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.652053   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652675   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652820   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652904   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:35.652927   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:05:35.652986   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.653055   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:05:35.653068   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.653078   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653249   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653283   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653371   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653405   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653519   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653550   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.653617   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:05:35.689663   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:35.689725   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:35.741169   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:35.741183   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.741249   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:35.756280   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:35.765285   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:35.774250   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:35.774298   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:35.783141   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.792103   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:35.800998   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.809931   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:35.818930   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:35.828100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:35.837011   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:35.846071   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:35.854051   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:35.862225   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:35.953449   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:35.973036   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.973102   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:35.989701   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.002119   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:36.020969   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.032323   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.043370   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:36.064919   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.076134   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:36.091185   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:36.094041   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:36.101975   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:36.115524   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:36.210477   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:36.307446   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:36.307474   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:36.321506   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:36.425142   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:38.743512   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31834803s)
	I0906 12:05:38.743573   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:38.754689   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:38.767595   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:38.778550   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:38.871803   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:38.967444   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.077912   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:39.091499   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:39.102647   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.199868   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:39.269396   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:39.269473   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:39.274126   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:39.274176   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:39.279526   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:39.307628   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:39.307702   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.324272   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.363496   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:39.384323   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:05:39.405031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:39.405472   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:39.410152   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:39.420507   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:05:39.420684   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:39.420907   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.420932   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.430101   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56352
	I0906 12:05:39.430438   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.430796   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.430812   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.431028   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.431139   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:39.431212   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:39.431285   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:39.432244   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:05:39.432496   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.432518   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.441251   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56354
	I0906 12:05:39.441578   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.441903   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.441918   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.442138   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.442248   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:39.442348   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.25
	I0906 12:05:39.442355   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:39.442365   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:39.442516   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:39.442578   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:39.442588   12253 certs.go:256] generating profile certs ...
	I0906 12:05:39.442681   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:39.442772   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.7390dc12
	I0906 12:05:39.442830   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:39.442838   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:39.442859   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:39.442879   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:39.442896   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:39.442915   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:39.442951   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:39.442970   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:39.442987   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:39.443067   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:39.443106   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:39.443114   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:39.443147   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:39.443183   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:39.443212   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:39.443276   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:39.443310   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.443336   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.443355   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.443381   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:39.443473   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:39.443566   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:39.443662   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:39.443742   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:39.474601   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:05:39.477773   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:05:39.486087   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:05:39.489291   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:05:39.497797   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:05:39.500976   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:05:39.508902   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:05:39.512097   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:05:39.522208   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:05:39.529029   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:05:39.538558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:05:39.541788   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:05:39.551255   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:39.571163   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:39.590818   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:39.610099   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:39.629618   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:39.649203   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:39.668940   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:39.688319   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:39.707568   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:39.727593   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:39.746946   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:39.766191   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:05:39.779761   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:05:39.793389   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:05:39.807028   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:05:39.820798   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:05:39.834428   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:05:39.848169   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:05:39.861939   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:39.866268   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:39.875520   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878895   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878936   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.883242   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:39.892394   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:39.901475   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904880   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904919   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.909164   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:39.918366   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:39.927561   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.930968   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.931005   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.935325   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:05:39.944442   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:39.947919   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:39.952225   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:39.956510   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:39.960794   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:39.965188   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:39.969546   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:05:39.973805   12253 kubeadm.go:934] updating node {m02 192.169.0.25 8443 v1.31.0 docker true true} ...
	I0906 12:05:39.973869   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:39.973885   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:39.973920   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:39.987092   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:39.987133   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:39.987182   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:39.995535   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:39.995584   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:05:40.003762   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:05:40.017266   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:40.030719   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:40.044348   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:40.047310   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:40.057546   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.156340   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.171403   12253 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:40.171578   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:40.192574   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:05:40.213457   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.344499   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.359579   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:40.359776   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:05:40.359813   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:05:40.359973   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:05:40.360058   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:40.360063   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:40.360071   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:40.360075   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:47.989850   12253 round_trippers.go:574] Response Status:  in 7629 milliseconds
	I0906 12:05:48.990862   12253 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990895   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:48.990902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:48.990922   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:49.992764   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:49.992860   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:56357->192.169.0.24:8443: read: connection reset by peer
	I0906 12:05:49.992914   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:49.992923   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:49.992931   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:49.992938   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:50.992884   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:50.992985   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:50.992993   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:50.993001   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:50.993007   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:51.994156   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:51.994218   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:51.994272   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:51.994282   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:51.994293   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:51.994300   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:52.994610   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:52.994678   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:52.994684   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:52.994690   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:52.994695   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:53.996452   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:53.996513   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:53.996568   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:53.996577   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:53.996587   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:53.996600   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:54.996281   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:54.996431   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:54.996445   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:54.996456   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:54.996470   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997732   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:55.997791   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:55.997834   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:55.997841   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:55.997848   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997855   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998659   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:56.998737   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:56.998743   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:56.998748   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998753   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:57.998704   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:57.998768   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:57.998824   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:57.998830   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:57.998841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:57.998847   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.234879   12253 round_trippers.go:574] Response Status: 200 OK in 2236 milliseconds
	I0906 12:06:00.235584   12253 node_ready.go:49] node "ha-343000-m02" has status "Ready":"True"
	I0906 12:06:00.235597   12253 node_ready.go:38] duration metric: took 19.875567395s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:06:00.235604   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:00.235643   12253 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:06:00.235653   12253 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:06:00.235696   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:00.235701   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.235707   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.235711   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.262088   12253 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0906 12:06:00.268356   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.268408   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:00.268414   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.268421   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.268427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.271139   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.271625   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.271633   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.271638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.271642   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.273753   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.274136   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.274144   12253 pod_ready.go:82] duration metric: took 5.774893ms for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274150   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274179   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:00.274184   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.274189   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.274192   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.275924   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.276344   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.276351   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.276355   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.276360   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278001   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.278322   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.278329   12253 pod_ready.go:82] duration metric: took 4.174121ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278335   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278363   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:00.278368   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.278373   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278379   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.280145   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.280523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.280530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.280535   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.280540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.282107   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.282477   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.282486   12253 pod_ready.go:82] duration metric: took 4.146745ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282492   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282522   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.282528   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.282534   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.282537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.284223   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.284663   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.284670   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.284676   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.284679   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.286441   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.782726   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.782751   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.782796   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.782807   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.786175   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:00.786692   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.786700   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.786706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.786710   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.788874   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.283655   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.283671   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.283678   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.283683   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.285985   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.286465   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.286473   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.286481   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.286485   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.288565   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.782633   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.782651   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.782659   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.782664   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.785843   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:01.786296   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.786304   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.786309   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.786314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.788345   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.788771   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.788779   12253 pod_ready.go:82] duration metric: took 1.506279407s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788786   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788823   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:01.788828   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.788833   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.788838   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.790798   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:01.791160   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:01.791171   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.791184   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.791187   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.793250   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.793611   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.793620   12253 pod_ready.go:82] duration metric: took 4.828788ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.793631   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.837481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:01.837495   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.837504   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.837509   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.840718   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.037469   12253 request.go:632] Waited for 196.356353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037506   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037512   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.037520   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.037525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.040221   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.040550   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:02.040560   12253 pod_ready.go:82] duration metric: took 246.922589ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.040567   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.237374   12253 request.go:632] Waited for 196.770161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237419   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.237436   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.237442   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.240098   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.437383   12253 request.go:632] Waited for 196.723319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437429   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.437443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.437449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.440277   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.636447   12253 request.go:632] Waited for 94.227022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636509   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636516   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.636524   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.636528   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.640095   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.837639   12253 request.go:632] Waited for 197.104367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837707   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837717   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.837763   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.837788   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.841651   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:03.040768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.040781   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.040789   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.040793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.043403   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:03.236506   12253 request.go:632] Waited for 192.559607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236606   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236618   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.236631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.236637   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.240751   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.540928   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.540954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.540973   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.540980   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.545016   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.637802   12253 request.go:632] Waited for 92.404425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637881   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637890   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.637902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.637910   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.642163   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.041768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.041794   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.041804   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.041813   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.046193   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.047251   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.047260   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.047266   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.047277   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.056137   12253 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0906 12:06:04.056428   12253 pod_ready.go:103] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:04.541406   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.541425   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.541434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.541439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.544224   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:04.544684   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.544691   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.544697   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.544707   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.547090   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.040907   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:05.040922   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.040930   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.040934   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.044733   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.045134   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:05.045143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.045149   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.045152   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.047168   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.047571   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.047581   12253 pod_ready.go:82] duration metric: took 3.007003521s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047587   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047621   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:05.047626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.047631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.047636   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.049432   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:05.236368   12253 request.go:632] Waited for 186.419986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236497   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236514   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.236525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.236532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.239828   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.240204   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.240214   12253 pod_ready.go:82] duration metric: took 192.620801ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.240220   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.435846   12253 request.go:632] Waited for 195.558833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435906   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.435914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.435921   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.438946   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.636650   12253 request.go:632] Waited for 197.107158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636711   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636719   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.636728   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.636733   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.639926   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.640212   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.640221   12253 pod_ready.go:82] duration metric: took 399.995302ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.640232   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.837401   12253 request.go:632] Waited for 197.103806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837486   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.837513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.837523   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.840662   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.035821   12253 request.go:632] Waited for 194.603254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035950   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.035962   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.035968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.039252   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.039561   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.039571   12253 pod_ready.go:82] duration metric: took 399.332528ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.039578   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.236804   12253 request.go:632] Waited for 197.127943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236841   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236849   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.236856   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.236861   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.239571   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.435983   12253 request.go:632] Waited for 195.836904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436083   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436095   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.436107   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.436115   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.440028   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.440297   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.440306   12253 pod_ready.go:82] duration metric: took 400.722778ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.440313   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.635911   12253 request.go:632] Waited for 195.558637ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635989   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635997   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.636005   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.636009   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.638766   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.836563   12253 request.go:632] Waited for 197.42239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836630   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836640   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.836651   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.836656   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.840182   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.840437   12253 pod_ready.go:93] pod "kube-proxy-8hww6" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.840446   12253 pod_ready.go:82] duration metric: took 400.127213ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.840453   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.036000   12253 request.go:632] Waited for 195.50345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036078   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.036093   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.036101   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.039960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.237550   12253 request.go:632] Waited for 197.186932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237618   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237627   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.237638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.237645   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.241824   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:07.242186   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.242196   12253 pod_ready.go:82] duration metric: took 401.736827ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.242202   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.437080   12253 request.go:632] Waited for 194.824311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437120   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437127   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.437134   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.437177   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.439746   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:07.636668   12253 request.go:632] Waited for 196.435868ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636764   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636773   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.636784   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.636790   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.640555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.640971   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.640979   12253 pod_ready.go:82] duration metric: took 398.771488ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.640986   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.837782   12253 request.go:632] Waited for 196.72045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837885   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837895   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.837913   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.841222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.037474   12253 request.go:632] Waited for 195.707367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037543   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037551   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.037559   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.037564   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.041008   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.237863   12253 request.go:632] Waited for 96.589125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238009   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238027   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.238039   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.238064   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.241278   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.436102   12253 request.go:632] Waited for 194.439362ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436137   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.436151   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.436183   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.439043   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:08.642356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.642376   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.642388   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.642397   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.645933   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.837859   12253 request.go:632] Waited for 191.363155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837895   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837900   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.837911   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.841081   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.141167   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.141182   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.141191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.141195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.144158   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.235895   12253 request.go:632] Waited for 91.258445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235957   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.235972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.235977   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.239065   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.641494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.641508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.641517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.641521   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.644350   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.644757   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.644765   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.644771   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.644774   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.647091   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.647426   12253 pod_ready.go:103] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:10.141899   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:10.141923   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.141934   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.141941   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.145540   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.145973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.145981   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.145987   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.145989   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.148176   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.148538   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.148547   12253 pod_ready.go:82] duration metric: took 2.507551998s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.148554   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.235772   12253 request.go:632] Waited for 87.183047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235805   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235811   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.235831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.235849   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.238046   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.437551   12253 request.go:632] Waited for 199.151796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437619   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.437643   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.437648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.440639   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.440964   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.440974   12253 pod_ready.go:82] duration metric: took 292.414078ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.440981   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.636354   12253 request.go:632] Waited for 195.279783ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636426   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636437   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.636450   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.636456   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.641024   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:10.836907   12253 request.go:632] Waited for 195.513588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.836991   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.837001   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.837012   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.837020   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.840787   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.841194   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.841203   12253 pod_ready.go:82] duration metric: took 400.216153ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.841209   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.036390   12253 request.go:632] Waited for 195.137597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036488   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.036510   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.036517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.040104   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:11.236464   12253 request.go:632] Waited for 195.741522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.236507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.236513   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.244008   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:11.244389   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:11.244399   12253 pod_ready.go:82] duration metric: took 403.184015ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.244409   12253 pod_ready.go:39] duration metric: took 11.008775818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:11.244428   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:11.244490   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:11.260044   12253 api_server.go:72] duration metric: took 31.088552933s to wait for apiserver process to appear ...
	I0906 12:06:11.260057   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:11.260076   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:11.268665   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:11.268720   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:11.268725   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.268730   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.268734   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.269258   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:11.269330   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:11.269341   12253 api_server.go:131] duration metric: took 9.279203ms to wait for apiserver health ...
	I0906 12:06:11.269351   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:11.436974   12253 request.go:632] Waited for 167.586901ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437022   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437029   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.437043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.437047   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.441302   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:11.447157   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:11.447183   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447192   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447198   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.447201   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.447204   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.447208   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447211   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.447214   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.447218   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447223   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.447228   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.447232   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.447237   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.447241   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.447244   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.447247   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.447253   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.447258   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.447264   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.447268   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.447270   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.447273   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.447276   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.447294   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.447303   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.447308   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.447313   12253 system_pods.go:74] duration metric: took 177.956833ms to wait for pod list to return data ...
	I0906 12:06:11.447319   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:11.637581   12253 request.go:632] Waited for 190.208152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637651   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637657   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.637664   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.637668   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.650462   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:11.650666   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:11.650678   12253 default_sa.go:55] duration metric: took 203.353142ms for default service account to be created ...
	I0906 12:06:11.650687   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:11.837096   12253 request.go:632] Waited for 186.371823ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837128   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837134   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.837139   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.837143   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.866992   12253 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0906 12:06:11.873145   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:11.873167   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873175   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873181   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.873185   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.873188   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.873195   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873199   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.873202   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.873206   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873211   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.873215   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.873219   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.873223   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.873227   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.873231   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.873233   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.873236   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.873240   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.873244   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.873247   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.873252   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.873256   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.873259   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.873262   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.873265   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.873268   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.873274   12253 system_pods.go:126] duration metric: took 222.581886ms to wait for k8s-apps to be running ...
	I0906 12:06:11.873283   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:11.873340   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:11.886025   12253 system_svc.go:56] duration metric: took 12.733456ms WaitForService to wait for kubelet
	I0906 12:06:11.886050   12253 kubeadm.go:582] duration metric: took 31.714560483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:11.886086   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:12.036232   12253 request.go:632] Waited for 150.073414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036268   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036273   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:12.036286   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:12.036290   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:12.048789   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:12.049838   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049855   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049868   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049873   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049876   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049881   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049884   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049893   12253 node_conditions.go:105] duration metric: took 163.797553ms to run NodePressure ...
	I0906 12:06:12.049902   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:12.049922   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:12.087274   12253 out.go:201] 
	I0906 12:06:12.123635   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:12.123705   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.161370   12253 out.go:177] * Starting "ha-343000-m03" control-plane node in "ha-343000" cluster
	I0906 12:06:12.219408   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:12.219442   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:12.219591   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:12.219605   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:12.219694   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.220349   12253 start.go:360] acquireMachinesLock for ha-343000-m03: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:12.220455   12253 start.go:364] duration metric: took 68.753µs to acquireMachinesLock for "ha-343000-m03"
	I0906 12:06:12.220476   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:12.220482   12253 fix.go:54] fixHost starting: m03
	I0906 12:06:12.220813   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:12.220843   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:12.230327   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56369
	I0906 12:06:12.230794   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:12.231264   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:12.231284   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:12.231543   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:12.231691   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.231816   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:06:12.231923   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.232050   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:06:12.233006   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.233040   12253 fix.go:112] recreateIfNeeded on ha-343000-m03: state=Stopped err=<nil>
	I0906 12:06:12.233052   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	W0906 12:06:12.233162   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:12.271360   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m03" ...
	I0906 12:06:12.312281   12253 main.go:141] libmachine: (ha-343000-m03) Calling .Start
	I0906 12:06:12.312472   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.312588   12253 main.go:141] libmachine: (ha-343000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid
	I0906 12:06:12.314085   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.314111   12253 main.go:141] libmachine: (ha-343000-m03) DBG | pid 10460 is in state "Stopped"
	I0906 12:06:12.314145   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid...
	I0906 12:06:12.314314   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Using UUID 5abf6194-a669-4f35-b6fc-c88bfc629e81
	I0906 12:06:12.392247   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Generated MAC 3e:84:3d:bc:9c:31
	I0906 12:06:12.392279   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:12.392453   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392498   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392570   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5abf6194-a669-4f35-b6fc-c88bfc629e81", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:12.392621   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5abf6194-a669-4f35-b6fc-c88bfc629e81 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:12.392631   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:12.394468   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Pid is 12285
	I0906 12:06:12.395082   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Attempt 0
	I0906 12:06:12.395129   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.395296   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 12285
	I0906 12:06:12.398168   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Searching for 3e:84:3d:bc:9c:31 in /var/db/dhcpd_leases ...
	I0906 12:06:12.398286   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:12.398303   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:12.398316   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:12.398325   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:12.398339   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:06:12.398359   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found match: 3e:84:3d:bc:9c:31
	I0906 12:06:12.398382   12253 main.go:141] libmachine: (ha-343000-m03) DBG | IP: 192.169.0.26
	I0906 12:06:12.398414   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetConfigRaw
	I0906 12:06:12.399172   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:12.399462   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.400029   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:12.400042   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.400184   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:12.400344   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:12.400464   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400591   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400728   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:12.400904   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:12.401165   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:12.401176   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:12.404210   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:12.438119   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:12.439198   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.439227   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.439241   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.439256   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.845267   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:12.845282   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:12.960204   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.960224   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.960244   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.960258   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.961041   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:12.961054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:06:18.729819   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:06:18.729887   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:06:18.729898   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:06:18.753054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:06:23.465534   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:06:23.465548   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465717   12253 buildroot.go:166] provisioning hostname "ha-343000-m03"
	I0906 12:06:23.465726   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465818   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.465902   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.465981   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466055   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466146   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.466265   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.466412   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.466421   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m03 && echo "ha-343000-m03" | sudo tee /etc/hostname
	I0906 12:06:23.536843   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m03
	
	I0906 12:06:23.536860   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.536985   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.537079   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537171   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537236   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.537354   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.537507   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.537525   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:06:23.606665   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:06:23.606681   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:06:23.606695   12253 buildroot.go:174] setting up certificates
	I0906 12:06:23.606700   12253 provision.go:84] configureAuth start
	I0906 12:06:23.606707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.606846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:23.606946   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.607022   12253 provision.go:143] copyHostCerts
	I0906 12:06:23.607051   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607104   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:06:23.607112   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607235   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:06:23.607441   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607476   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:06:23.607482   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607552   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:06:23.607719   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607747   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:06:23.607752   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607836   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:06:23.607981   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m03 san=[127.0.0.1 192.169.0.26 ha-343000-m03 localhost minikube]
	I0906 12:06:23.699873   12253 provision.go:177] copyRemoteCerts
	I0906 12:06:23.699921   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:06:23.699935   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.700077   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.700175   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.700270   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.700376   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:23.737703   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:06:23.737771   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:06:23.757756   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:06:23.757827   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:06:23.777598   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:06:23.777673   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:06:23.797805   12253 provision.go:87] duration metric: took 191.09552ms to configureAuth
	I0906 12:06:23.797818   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:06:23.797988   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:23.798002   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:23.798134   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.798231   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.798314   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798400   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798488   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.798597   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.798724   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.798732   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:06:23.860492   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:06:23.860504   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:06:23.860586   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:06:23.860599   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.860730   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.860807   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.860907   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.861010   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.861140   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.861285   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.861332   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:06:23.935021   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:06:23.935039   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.935186   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.935286   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935371   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935478   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.935609   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.935750   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.935762   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:06:25.580352   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:06:25.580366   12253 machine.go:96] duration metric: took 13.180301802s to provisionDockerMachine
	I0906 12:06:25.580373   12253 start.go:293] postStartSetup for "ha-343000-m03" (driver="hyperkit")
	I0906 12:06:25.580380   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:06:25.580394   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.580572   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:06:25.580585   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.580672   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.580761   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.580846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.580931   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.621691   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:06:25.626059   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:06:25.626069   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:06:25.626156   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:06:25.626292   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:06:25.626299   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:06:25.626479   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:06:25.640080   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:25.666256   12253 start.go:296] duration metric: took 85.87411ms for postStartSetup
	I0906 12:06:25.666279   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.666455   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:06:25.666469   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.666570   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.666655   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.666734   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.666815   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.704275   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:06:25.704337   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:06:25.737458   12253 fix.go:56] duration metric: took 13.516946704s for fixHost
	I0906 12:06:25.737482   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.737626   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.737732   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737832   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737920   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.738049   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:25.738192   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:25.738199   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:06:25.803149   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649585.904544960
	
	I0906 12:06:25.803162   12253 fix.go:216] guest clock: 1725649585.904544960
	I0906 12:06:25.803168   12253 fix.go:229] Guest: 2024-09-06 12:06:25.90454496 -0700 PDT Remote: 2024-09-06 12:06:25.737472 -0700 PDT m=+83.951104505 (delta=167.07296ms)
	I0906 12:06:25.803178   12253 fix.go:200] guest clock delta is within tolerance: 167.07296ms
	I0906 12:06:25.803182   12253 start.go:83] releasing machines lock for "ha-343000-m03", held for 13.582690615s
	I0906 12:06:25.803198   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.803329   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:25.825405   12253 out.go:177] * Found network options:
	I0906 12:06:25.846508   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25
	W0906 12:06:25.867569   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.867608   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.867639   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868819   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:06:25.868894   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	W0906 12:06:25.868907   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.868930   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.869032   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:06:25.869046   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.869089   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869194   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869217   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869337   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869358   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869516   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.869640   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	W0906 12:06:25.904804   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:06:25.904860   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:06:25.953607   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:06:25.953623   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:25.953707   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:25.969069   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:06:25.977320   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:06:25.985732   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:06:25.985790   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:06:25.994169   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.002564   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:06:26.011076   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.019409   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:06:26.027829   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:06:26.036100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:06:26.044789   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:06:26.053382   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:06:26.060878   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:06:26.068234   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.161656   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:06:26.180419   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:26.180540   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:06:26.197783   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.208495   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:06:26.223788   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.234758   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.245879   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:06:26.268201   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.279748   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:26.298675   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:06:26.301728   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:06:26.309959   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:06:26.323781   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:06:26.418935   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:06:26.520404   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:06:26.520429   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:06:26.534785   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.635772   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:06:28.931869   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296074778s)
	I0906 12:06:28.931929   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:06:28.943824   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:06:28.959441   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:28.970674   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:06:29.066042   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:06:29.168956   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.286202   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:06:29.299988   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:29.311495   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.429259   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:06:29.496621   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:06:29.496705   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:06:29.502320   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:06:29.502374   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:06:29.505587   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:06:29.534004   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:06:29.534083   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.551834   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.590600   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:06:29.632268   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:06:29.653333   12253 out.go:177]   - env NO_PROXY=192.169.0.24,192.169.0.25
	I0906 12:06:29.674153   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:29.674373   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:06:29.677525   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:29.687202   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:06:29.687389   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:29.687610   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.687639   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.696472   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56391
	I0906 12:06:29.696894   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.697234   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.697246   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.697502   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.697641   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:06:29.697736   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:29.697809   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:06:29.698794   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:06:29.699046   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.699070   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.707791   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56393
	I0906 12:06:29.708136   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.708457   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.708468   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.708696   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.708812   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:06:29.708911   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.26
	I0906 12:06:29.708917   12253 certs.go:194] generating shared ca certs ...
	I0906 12:06:29.708928   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:06:29.709069   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:06:29.709123   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:06:29.709132   12253 certs.go:256] generating profile certs ...
	I0906 12:06:29.709257   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:06:29.709340   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.e464bc73
	I0906 12:06:29.709394   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:06:29.709401   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:06:29.709422   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:06:29.709447   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:06:29.709465   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:06:29.709482   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:06:29.709510   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:06:29.709528   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:06:29.709550   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:06:29.709623   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:06:29.709661   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:06:29.709669   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:06:29.709702   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:06:29.709732   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:06:29.709766   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:06:29.709833   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:29.709868   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:06:29.709889   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:06:29.709908   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:29.709932   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:06:29.710030   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:06:29.710110   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:06:29.710211   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:06:29.710304   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:06:29.742607   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:06:29.746569   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:06:29.754558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:06:29.757841   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:06:29.765881   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:06:29.769140   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:06:29.778234   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:06:29.781483   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:06:29.789701   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:06:29.792877   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:06:29.801155   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:06:29.804562   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:06:29.812907   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:06:29.833527   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:06:29.854042   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:06:29.874274   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:06:29.894675   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:06:29.914759   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:06:29.935020   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:06:29.955774   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:06:29.976174   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:06:29.996348   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:06:30.016705   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:06:30.036752   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:06:30.050816   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:06:30.064469   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:06:30.078121   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:06:30.092155   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:06:30.106189   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:06:30.120313   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:06:30.134091   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:06:30.138549   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:06:30.147484   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151103   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151157   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.155470   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:06:30.164282   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:06:30.173035   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176736   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176783   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.181161   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:06:30.189862   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:06:30.198669   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202224   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202268   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.206651   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:06:30.215322   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:06:30.218903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:06:30.223374   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:06:30.227903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:06:30.232564   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:06:30.237667   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:06:30.242630   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:06:30.247576   12253 kubeadm.go:934] updating node {m03 192.169.0.26 8443 v1.31.0 docker true true} ...
	I0906 12:06:30.247652   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:06:30.247670   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:06:30.247719   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:06:30.261197   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:06:30.261239   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:06:30.261300   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:06:30.269438   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:06:30.269496   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:06:30.277362   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:06:30.291520   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:06:30.305340   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:06:30.319495   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:06:30.322637   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:30.332577   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.441240   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.456369   12253 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:06:30.456602   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:30.477910   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:06:30.498557   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.628440   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.645947   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:06:30.646165   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:06:30.646208   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:06:30.646371   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.646412   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:30.646417   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.646423   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.646427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649121   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:30.649426   12253 node_ready.go:49] node "ha-343000-m03" has status "Ready":"True"
	I0906 12:06:30.649435   12253 node_ready.go:38] duration metric: took 3.055625ms for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.649441   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:30.649480   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:30.649485   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.649491   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.655093   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:30.660461   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:30.660533   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:30.660539   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.660545   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.660550   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.664427   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:30.664864   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:30.664872   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.664877   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.664880   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.667569   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.161508   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.161522   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.161528   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.161531   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.164411   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.165052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.165061   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.165070   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.165074   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.167897   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.660843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.660861   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.660868   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.660871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.668224   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:31.668938   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.668954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.668969   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.668987   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.674737   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:32.161451   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.161468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.161496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.161501   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.164555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.165061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.165069   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.165075   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.165078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.167689   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.661269   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.661285   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.661294   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.661316   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.664943   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.665460   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.665469   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.665475   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.665479   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.667934   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.668229   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:33.161930   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.161964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.161971   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.161975   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.165689   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.166478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.166488   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.166497   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.166503   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.169565   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.660809   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.660831   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.660841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.660846   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.664137   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.665061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.665071   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.665078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.665099   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.667811   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.161378   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.161391   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.161398   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.161403   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.165094   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:34.165523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.165531   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.165537   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.165540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.167949   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.661206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.661222   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.661228   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.661230   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.663772   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.664499   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.664507   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.664513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.664517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.666543   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.161667   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.161689   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.161700   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.161705   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.166875   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.167311   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.167319   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.167324   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.167328   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.172902   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.173323   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:35.661973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.661988   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.661994   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.661998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.664583   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.664981   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.664989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.664998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.665001   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.667322   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.161747   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.161785   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.161793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.161796   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.164939   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.165450   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.165459   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.165464   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.165474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.167808   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.661492   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.661508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.661532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.661537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.664941   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.665455   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.665464   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.665471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.665474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.668192   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.161660   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.161678   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.161685   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.161688   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.164012   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.164541   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.164549   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.164555   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.164558   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.166577   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.662457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.662494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.662505   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.662511   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.665311   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.666039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.666048   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.666053   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.666056   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.668294   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.668600   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:38.162628   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.162646   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.162654   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.162659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.165660   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.166284   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.166292   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.166298   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.166301   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.168559   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.662170   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.662185   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.662191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.662195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.664733   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.665194   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.665207   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.665211   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.667563   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.161491   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.161508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.161517   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.161522   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.164370   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.164762   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.164770   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.164776   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.164780   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.166614   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:39.661843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.661860   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.661866   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.661871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.664287   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.664950   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.664958   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.664964   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.664968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.667194   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.160891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.160921   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.160933   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.160955   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.165388   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:40.166039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.166047   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.166052   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.166055   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.168212   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.168635   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:40.661892   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.661907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.661914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.661917   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.664471   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.664962   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.664970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.664975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.664984   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.667379   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.160779   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.160797   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.160824   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.160830   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.163878   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:41.164433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.164441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.164446   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.164451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.166991   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.661124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.661138   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.661145   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.661149   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.663595   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.664206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.664214   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.664220   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.664224   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.666219   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:42.161906   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.161926   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.161937   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.161945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.165222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:42.165752   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.165760   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.165765   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.165769   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.167913   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.661255   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.661274   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.661282   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.661288   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.664242   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.664689   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.664697   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.664703   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.664706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.666742   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.667053   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:43.161512   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.161530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.161565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.161575   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.164590   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:43.165234   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.165242   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.165254   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.165258   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.167961   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.660826   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.660844   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.660873   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.660882   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.663557   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.663959   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.663966   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.663972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.663976   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.665816   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.162103   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.162133   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.162158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.162164   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.165060   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.165598   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.165606   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.165612   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.165615   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.167589   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.662307   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.662328   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.662339   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.662344   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.665063   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.665602   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.665610   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.665615   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.665619   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.667607   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.667948   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:45.161277   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.161307   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.161314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.161317   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.163751   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.164201   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.164209   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.164215   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.164217   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.166274   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.662080   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.662099   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.662106   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.662110   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.664692   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.665145   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.665152   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.665158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.665162   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.667158   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.161983   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.162002   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.162011   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.162016   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.165135   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.165638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.165645   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.165650   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.165654   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.167660   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.660973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.661022   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.661036   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.661046   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.664600   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.665041   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.665051   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.665056   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.665061   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.667006   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:47.161827   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.161883   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.161895   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.161902   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.165549   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:47.166029   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.166037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.166041   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.166045   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.168233   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:47.168577   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:47.661554   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.661603   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.661616   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.661625   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.665796   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:47.666259   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.666266   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.666272   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.666276   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.668466   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.161876   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.161891   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.161898   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.161901   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.164419   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.164835   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.164843   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.164849   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.164853   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.166837   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:48.661562   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.661577   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.661598   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.661603   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.663972   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.664457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.664465   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.664470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.664475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.666445   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:49.161410   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.161430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.161438   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.161443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.164478   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:49.164982   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.164989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.164995   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.164998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.167071   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.660698   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.660724   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.660736   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.660742   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.664916   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:49.665349   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.665357   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.665363   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.665367   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.667392   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.667753   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:50.161030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.161065   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.161073   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.161080   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.163537   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.163963   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.163970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.163975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.163979   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.166093   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.661184   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.661238   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.661263   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.661267   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.663637   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.664117   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.664125   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.664131   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.664134   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.666067   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.161515   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.161550   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.161557   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.161561   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.163979   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.164681   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.164690   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.164694   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.164697   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.166790   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.661266   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.661291   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.661374   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.661387   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.664772   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:51.665195   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.665206   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.665216   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.667400   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.667769   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.667779   12253 pod_ready.go:82] duration metric: took 21.007261829s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667785   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667821   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:51.667826   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.667831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.667836   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.669791   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.670205   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.670213   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.670218   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.670221   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.672346   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.672671   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.672679   12253 pod_ready.go:82] duration metric: took 4.889471ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672685   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672718   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:51.672723   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.672729   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.672737   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.674649   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.675030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.675037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.675043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.675046   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.676915   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.677288   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.677297   12253 pod_ready.go:82] duration metric: took 4.607311ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677303   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677339   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:51.677344   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.677349   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.677352   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.679418   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.679897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:51.679907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.679916   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.679920   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.681919   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.682327   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.682336   12253 pod_ready.go:82] duration metric: took 5.028149ms for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682343   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682376   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:51.682381   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.682386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.682389   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.684781   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.685200   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:51.685207   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.685212   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.685215   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.687181   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.687676   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.687685   12253 pod_ready.go:82] duration metric: took 5.337542ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.687696   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.862280   12253 request.go:632] Waited for 174.544275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862360   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862372   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.862382   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.862386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.865455   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.062085   12253 request.go:632] Waited for 196.080428ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.062136   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.062140   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.064928   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.065322   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.065331   12253 pod_ready.go:82] duration metric: took 377.628905ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.065338   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.261393   12253 request.go:632] Waited for 196.009549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261459   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261471   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.261485   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.261492   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.265336   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.461317   12253 request.go:632] Waited for 195.311084ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461362   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.461370   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.461376   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.464202   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.464645   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.464654   12253 pod_ready.go:82] duration metric: took 399.309786ms for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.464661   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.662233   12253 request.go:632] Waited for 197.535092ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662290   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662297   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.662305   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.662311   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.665143   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.862031   12253 request.go:632] Waited for 196.411368ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862119   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.862140   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.862145   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.866136   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.866533   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.866543   12253 pod_ready.go:82] duration metric: took 401.876526ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.866550   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.061387   12253 request.go:632] Waited for 194.796135ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061453   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061462   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.061470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.061476   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.064293   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:53.261526   12253 request.go:632] Waited for 196.74771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261649   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.261659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.261674   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.265603   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.266028   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.266036   12253 pod_ready.go:82] duration metric: took 399.480241ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.266042   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.461478   12253 request.go:632] Waited for 195.397016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461556   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461564   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.461571   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.461576   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.464932   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.661907   12253 request.go:632] Waited for 196.48537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661965   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661991   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.661998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.662002   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.665079   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.665555   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.665565   12253 pod_ready.go:82] duration metric: took 399.515968ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.665572   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.861347   12253 request.go:632] Waited for 195.73444ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861414   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861426   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.861434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.861439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.864177   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:54.061465   12253 request.go:632] Waited for 196.861398ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061517   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061554   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.061565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.061570   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.064700   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.065020   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.065030   12253 pod_ready.go:82] duration metric: took 399.451485ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.065037   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.263289   12253 request.go:632] Waited for 198.174584ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263384   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263411   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.263436   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.263461   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.266722   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.461554   12253 request.go:632] Waited for 194.387224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461599   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461609   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.461620   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.461627   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.465162   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.465533   12253 pod_ready.go:98] node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465543   12253 pod_ready.go:82] duration metric: took 400.500434ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	E0906 12:06:54.465549   12253 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465555   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.662665   12253 request.go:632] Waited for 197.074891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662731   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662740   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.662749   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.662755   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.665777   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.862800   12253 request.go:632] Waited for 196.680356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862911   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862924   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.862936   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.862945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.866911   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.867361   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.867371   12253 pod_ready.go:82] duration metric: took 401.810264ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.867377   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.062512   12253 request.go:632] Waited for 195.060729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062609   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062629   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.062641   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.062648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.066272   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:55.263362   12253 request.go:632] Waited for 196.717271ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263483   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.263507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.263520   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.268072   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.268453   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.268462   12253 pod_ready.go:82] duration metric: took 401.079128ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.268469   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.462230   12253 request.go:632] Waited for 193.721938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462312   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462320   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.462348   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.462357   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.465173   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:55.662089   12253 request.go:632] Waited for 196.464134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662239   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662255   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.662267   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.662275   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.666427   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.666704   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.666714   12253 pod_ready.go:82] duration metric: took 398.240112ms for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.666721   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.861681   12253 request.go:632] Waited for 194.913797ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861767   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861778   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.861790   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.861799   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.865874   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:56.063343   12253 request.go:632] Waited for 197.091674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063491   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.063501   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.063508   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.067298   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.067689   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.067699   12253 pod_ready.go:82] duration metric: took 400.971333ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.067706   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.261328   12253 request.go:632] Waited for 193.578385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261416   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261431   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.261443   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.261451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.264964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.461367   12253 request.go:632] Waited for 196.051039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.461449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.461454   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.464367   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:56.464786   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.464799   12253 pod_ready.go:82] duration metric: took 397.083037ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.464806   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.662171   12253 request.go:632] Waited for 197.309952ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662326   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662340   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.662352   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.662363   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.665960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.862106   12253 request.go:632] Waited for 195.559257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862214   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862225   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.862236   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.862243   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.866072   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.866312   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.866321   12253 pod_ready.go:82] duration metric: took 401.509457ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.866329   12253 pod_ready.go:39] duration metric: took 26.216828833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:56.866341   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:56.866386   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:56.878910   12253 api_server.go:72] duration metric: took 26.422463192s to wait for apiserver process to appear ...
	I0906 12:06:56.878922   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:56.878935   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:56.883745   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:56.883791   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:56.883796   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.883803   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.883808   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.884469   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:56.884556   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:56.884568   12253 api_server.go:131] duration metric: took 5.641059ms to wait for apiserver health ...
	I0906 12:06:56.884573   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:57.061374   12253 request.go:632] Waited for 176.731786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.061480   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.061487   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.066391   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:57.071924   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:57.071938   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.071942   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.071945   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.071948   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.071952   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.071955   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.071958   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.071962   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.071964   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.071967   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.071973   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.071977   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.071979   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.071982   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.071985   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.071988   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.071991   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.071993   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.071996   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.071999   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.072001   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.072004   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.072007   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.072009   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.072012   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.072017   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.072022   12253 system_pods.go:74] duration metric: took 187.444826ms to wait for pod list to return data ...
	I0906 12:06:57.072029   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:57.261398   12253 request.go:632] Waited for 189.325312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261443   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261451   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.261471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.261475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.264018   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:57.264078   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:57.264086   12253 default_sa.go:55] duration metric: took 192.051635ms for default service account to be created ...
	I0906 12:06:57.264103   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:57.461307   12253 request.go:632] Waited for 197.162907ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461342   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461347   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.461367   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.461393   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.466559   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:57.471959   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:57.471969   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.471974   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.471977   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.471981   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.471985   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.471989   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.471992   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.471994   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.471997   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.472000   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.472003   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.472006   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.472009   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.472012   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.472015   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.472017   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.472020   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.472023   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.472026   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.472029   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.472031   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.472034   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.472037   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.472040   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.472043   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.472047   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.472052   12253 system_pods.go:126] duration metric: took 207.94336ms to wait for k8s-apps to be running ...
	I0906 12:06:57.472059   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:57.472107   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:57.483773   12253 system_svc.go:56] duration metric: took 11.709185ms WaitForService to wait for kubelet
	I0906 12:06:57.483792   12253 kubeadm.go:582] duration metric: took 27.027343725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:57.483805   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:57.662348   12253 request.go:632] Waited for 178.494779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662425   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.662448   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.662457   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.665964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:57.666853   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666864   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666872   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666875   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666879   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666882   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666885   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666892   12253 node_conditions.go:105] duration metric: took 183.082589ms to run NodePressure ...
	I0906 12:06:57.666899   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:57.666913   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:57.689595   12253 out.go:201] 
	I0906 12:06:57.710968   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:57.711085   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.733311   12253 out.go:177] * Starting "ha-343000-m04" worker node in "ha-343000" cluster
	I0906 12:06:57.776497   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:57.776531   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:57.776758   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:57.776776   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:57.776887   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.777953   12253 start.go:360] acquireMachinesLock for ha-343000-m04: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:57.778066   12253 start.go:364] duration metric: took 90.409µs to acquireMachinesLock for "ha-343000-m04"
	I0906 12:06:57.778091   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:57.778100   12253 fix.go:54] fixHost starting: m04
	I0906 12:06:57.778535   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:57.778560   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:57.788011   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56397
	I0906 12:06:57.788364   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:57.788747   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:57.788763   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:57.789004   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:57.789119   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.789216   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:06:57.789290   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.789388   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 12:06:57.790320   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid 10558 missing from process table
	I0906 12:06:57.790346   12253 fix.go:112] recreateIfNeeded on ha-343000-m04: state=Stopped err=<nil>
	I0906 12:06:57.790354   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	W0906 12:06:57.790423   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:57.811236   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m04" ...
	I0906 12:06:57.853317   12253 main.go:141] libmachine: (ha-343000-m04) Calling .Start
	I0906 12:06:57.853695   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.853752   12253 main.go:141] libmachine: (ha-343000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid
	I0906 12:06:57.853833   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Using UUID 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5
	I0906 12:06:57.879995   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Generated MAC 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.880018   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:57.880162   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880191   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880277   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:57.880319   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:57.880330   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:57.881745   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Pid is 12301
	I0906 12:06:57.882213   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Attempt 0
	I0906 12:06:57.882229   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.882285   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 12301
	I0906 12:06:57.884227   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Searching for 6a:d8:ba:fa:e9:e7 in /var/db/dhcpd_leases ...
	I0906 12:06:57.884329   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:57.884344   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:06:57.884361   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:57.884375   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:57.884400   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:57.884406   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetConfigRaw
	I0906 12:06:57.884413   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found match: 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.884464   12253 main.go:141] libmachine: (ha-343000-m04) DBG | IP: 192.169.0.27
	I0906 12:06:57.885084   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:06:57.885308   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.885947   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:57.885958   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.886118   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:06:57.886263   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:06:57.886401   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886518   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886625   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:06:57.886755   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:57.886913   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:06:57.886920   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:57.890225   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:57.898506   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:57.900023   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:57.900046   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:57.900059   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:57.900081   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.292623   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:58.292638   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:58.407402   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:58.407425   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:58.407438   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:58.407462   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.408295   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:58.408305   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:07:04.116677   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:07:04.116760   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:07:04.116771   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:07:04.140349   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:07:32.960229   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:07:32.960245   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960393   12253 buildroot.go:166] provisioning hostname "ha-343000-m04"
	I0906 12:07:32.960404   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960498   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:32.960578   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:32.960651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960733   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960822   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:32.960938   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:32.961089   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:32.961097   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m04 && echo "ha-343000-m04" | sudo tee /etc/hostname
	I0906 12:07:33.029657   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m04
	
	I0906 12:07:33.029671   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.029803   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.029895   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.029994   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.030077   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.030212   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.030354   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.030365   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:07:33.094966   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:07:33.094982   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:07:33.094992   12253 buildroot.go:174] setting up certificates
	I0906 12:07:33.094999   12253 provision.go:84] configureAuth start
	I0906 12:07:33.095005   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:33.095148   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:33.095261   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.095345   12253 provision.go:143] copyHostCerts
	I0906 12:07:33.095383   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095445   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:07:33.095451   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095595   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:07:33.095788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095828   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:07:33.095833   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095913   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:07:33.096069   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096123   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:07:33.096133   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096216   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:07:33.096362   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m04 san=[127.0.0.1 192.169.0.27 ha-343000-m04 localhost minikube]
	I0906 12:07:33.148486   12253 provision.go:177] copyRemoteCerts
	I0906 12:07:33.148536   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:07:33.148551   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.148688   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.148785   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.148886   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.148968   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:33.184847   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:07:33.184925   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:07:33.204793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:07:33.204868   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:07:33.225189   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:07:33.225262   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:07:33.245047   12253 provision.go:87] duration metric: took 150.030083ms to configureAuth
	I0906 12:07:33.245064   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:07:33.245233   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:07:33.245264   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:33.245394   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.245474   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.245563   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245656   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245735   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.245857   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.245998   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.246006   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:07:33.305766   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:07:33.305779   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:07:33.305852   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:07:33.305865   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.305998   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.306097   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306198   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306282   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.306410   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.306555   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.306603   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:07:33.377062   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	Environment=NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:07:33.377081   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.377218   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.377309   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377395   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377470   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.377595   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.377731   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.377745   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:07:34.969419   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:07:34.969435   12253 machine.go:96] duration metric: took 37.07976383s to provisionDockerMachine
	I0906 12:07:34.969443   12253 start.go:293] postStartSetup for "ha-343000-m04" (driver="hyperkit")
	I0906 12:07:34.969451   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:07:34.969464   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:34.969653   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:07:34.969667   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:34.969755   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:34.969839   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:34.969938   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:34.970026   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.005883   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:07:35.009124   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:07:35.009135   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:07:35.009234   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:07:35.009411   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:07:35.009418   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:07:35.009642   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:07:35.017147   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:07:35.037468   12253 start.go:296] duration metric: took 68.014068ms for postStartSetup
	I0906 12:07:35.037488   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.037659   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:07:35.037673   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.037762   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.037851   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.037939   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.038032   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.073675   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:07:35.073738   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:07:35.107246   12253 fix.go:56] duration metric: took 37.325422655s for fixHost
	I0906 12:07:35.107273   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.107423   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.107527   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107605   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107700   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.107824   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:35.107967   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:35.107979   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:07:35.169429   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649655.267789382
	
	I0906 12:07:35.169443   12253 fix.go:216] guest clock: 1725649655.267789382
	I0906 12:07:35.169449   12253 fix.go:229] Guest: 2024-09-06 12:07:35.267789382 -0700 PDT Remote: 2024-09-06 12:07:35.107262 -0700 PDT m=+153.317111189 (delta=160.527382ms)
	I0906 12:07:35.169466   12253 fix.go:200] guest clock delta is within tolerance: 160.527382ms
	I0906 12:07:35.169472   12253 start.go:83] releasing machines lock for "ha-343000-m04", held for 37.387671405s
	I0906 12:07:35.169494   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.169634   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:35.192021   12253 out.go:177] * Found network options:
	I0906 12:07:35.212912   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	W0906 12:07:35.233597   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233618   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233628   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.233643   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234159   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234366   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234455   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:07:35.234491   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	W0906 12:07:35.234542   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234565   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234576   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.234648   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:07:35.234651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234665   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.234826   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234871   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235007   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235056   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235182   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235206   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.235315   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	W0906 12:07:35.268496   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:07:35.268557   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:07:35.318514   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:07:35.318528   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.318592   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.333874   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:07:35.343295   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:07:35.352492   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.352552   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:07:35.361630   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.370668   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:07:35.379741   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.389143   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:07:35.398542   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:07:35.407763   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:07:35.416819   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:07:35.426383   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:07:35.434689   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:07:35.442821   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:35.546285   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:07:35.565383   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.565458   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:07:35.587708   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.599182   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:07:35.618394   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.629619   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.640716   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:07:35.663169   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.673665   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.688883   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:07:35.691747   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:07:35.698972   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:07:35.712809   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:07:35.816741   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:07:35.926943   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.926972   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:07:35.942083   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:36.036699   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:08:37.056745   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01976389s)
	I0906 12:08:37.056810   12253 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:08:37.092348   12253 out.go:201] 
	W0906 12:08:37.113034   12253 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:07:33 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388087675Z" level=info msg="Starting up"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388874857Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.389448447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.406541023Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421511237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421602459Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421668995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421705837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421880023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421931200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422075608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422118185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422150327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422179563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422320644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422541368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424094220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424143575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424295349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424338381Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424460558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424511586Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425636722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425688205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425727379Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425760048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425791193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425860087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426020444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426094135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426129732Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426167338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426204356Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426237806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426268346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426298666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426328562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426358230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426389211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426418321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426456445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426487889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426516746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426546507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426578999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426618589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426715802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426750125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426780114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426818663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426851076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426879866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426909029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426949139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426988055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427021053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427049769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427133633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427177682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427207151Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427236043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427298115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427372740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427431600Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427611432Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427700568Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427760941Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427803687Z" level=info msg="containerd successfully booted in 0.022207s"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.407865115Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.420336385Z" level=info msg="Loading containers: start."
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.515687290Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.987987334Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.032534306Z" level=info msg="Loading containers: done."
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.046984897Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.047174717Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066396312Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066609197Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:07:35 ha-343000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.147371084Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:07:36 ha-343000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.149138373Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.151983630Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152081675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152156440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:37 ha-343000-m04 dockerd[1111]: time="2024-09-06T19:07:37.182746438Z" level=info msg="Starting up"
	Sep 06 19:08:37 ha-343000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:08:37.113090   12253 out.go:270] * 
	W0906 12:08:37.114019   12253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:08:37.156019   12253 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203311461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203639509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b7ad89fb08b292cfac509e0c383de126da238700a4e5bad8ad55590054381dba/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e01343203b7a509a71640de600f467038bad7b3d1d628993d32a37ee491ef5d1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f2f69bda625f237b44e2bc9af0e9cfd8b05e944b06149fba0d64a3e513338ba1/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607046115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607111680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607122664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607194485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.645965722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646293720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646498986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.648910956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664089064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664361369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664585443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664903965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:42 ha-343000 dockerd[1148]: time="2024-09-06T19:06:42.976990703Z" level=info msg="ignoring event" container=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977534371Z" level=info msg="shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977730802Z" level=warning msg="cleaning up after shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977773534Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339610101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339689283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339702665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.340050558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c1a60be55b6a1       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   f2f69bda625f2       storage-provisioner
	0e02b4bf2dbaa       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   e01343203b7a5       busybox-7dff88458-x6w7h
	22c131171f901       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   f2f69bda625f2       storage-provisioner
	803c4f073a4fa       ad83b2ca7b09e                                                                                         2 minutes ago        Running             kube-proxy                1                   b7ad89fb08b29       kube-proxy-x6pfk
	554acd0f20e32       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   a2638e4522073       coredns-6f6b679f8f-q4rhs
	c86abdd0a1a3a       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   b2c6d9f178680       kindnet-tj4jx
	d15c1bf38706e       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   9e798ad091c8d       coredns-6f6b679f8f-99jtt
	890baa8f92fc8       045733566833c                                                                                         2 minutes ago        Running             kube-controller-manager   6                   26308c7f15e49       kube-controller-manager-ha-343000
	9ca63a507d338       604f5db92eaa8                                                                                         2 minutes ago        Running             kube-apiserver            6                   70de0991ef26f       kube-apiserver-ha-343000
	5f2ecf46dbad7       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   1804cca78c5d0       kube-vip-ha-343000
	4d2f47c39f165       1766f54c897f0                                                                                         3 minutes ago        Running             kube-scheduler            2                   df0b4d2f0d771       kube-scheduler-ha-343000
	592c214e97d5c       604f5db92eaa8                                                                                         3 minutes ago        Exited              kube-apiserver            5                   70de0991ef26f       kube-apiserver-ha-343000
	8bdc400b3db6d       2e96e5913fc06                                                                                         3 minutes ago        Running             etcd                      2                   83808e05f091c       etcd-ha-343000
	5cc4eed8c219e       045733566833c                                                                                         3 minutes ago        Exited              kube-controller-manager   5                   26308c7f15e49       kube-controller-manager-ha-343000
	4066393d7e7ae       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   6a05e2d25f30e       kube-vip-ha-343000
	9b99b2f8d6eda       1766f54c897f0                                                                                         7 minutes ago        Exited              kube-scheduler            1                   920b387c38cf9       kube-scheduler-ha-343000
	11af4dafae646       2e96e5913fc06                                                                                         7 minutes ago        Exited              etcd                      1                   c94f15fec6f2c       etcd-ha-343000
	126eb18521cb6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 minutes ago       Exited              busybox                   0                   2dc504f501783       busybox-7dff88458-x6w7h
	34d5a9fcc1387       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   80fa6178f69f4       coredns-6f6b679f8f-99jtt
	931a9cafdfafa       cbb01a7bd410d                                                                                         13 minutes ago       Exited              coredns                   0                   7b9ebf456874a       coredns-6f6b679f8f-q4rhs
	9e6763d81a899       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              13 minutes ago       Exited              kindnet-cni               0                   c552ca6da226c       kindnet-tj4jx
	9ab0b6ac90ac6       ad83b2ca7b09e                                                                                         13 minutes ago       Exited              kube-proxy                0                   3b385975c32bf       kube-proxy-x6pfk
	
	
	==> coredns [34d5a9fcc138] <==
	[INFO] 10.244.2.2:58789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120754s
	[INFO] 10.244.2.2:43811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080086s
	[INFO] 10.244.1.2:37705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094111s
	[INFO] 10.244.1.2:51020 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101921s
	[INFO] 10.244.1.2:35595 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128009s
	[INFO] 10.244.1.2:37466 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081653s
	[INFO] 10.244.1.2:44316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092754s
	[INFO] 10.244.0.4:46178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007817s
	[INFO] 10.244.0.4:45010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093888s
	[INFO] 10.244.0.4:53754 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054541s
	[INFO] 10.244.0.4:50908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074295s
	[INFO] 10.244.0.4:40350 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117915s
	[INFO] 10.244.2.2:46721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198726s
	[INFO] 10.244.2.2:49403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105805s
	[INFO] 10.244.2.2:38196 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015881s
	[INFO] 10.244.1.2:40271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009061s
	[INFO] 10.244.1.2:58192 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123353s
	[INFO] 10.244.1.2:58287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102796s
	[INFO] 10.244.2.2:60545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120865s
	[INFO] 10.244.1.2:58192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108489s
	[INFO] 10.244.0.4:46772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135939s
	[INFO] 10.244.0.4:57982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000032936s
	[INFO] 10.244.0.4:40948 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [554acd0f20e3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37373 - 8840 "HINFO IN 6495643642992279060.3361092094518909540. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011184519s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[237904971]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.794) (total time: 30004ms):
	Trace[237904971]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:42.797)
	Trace[237904971]: [30.004464183s] [30.004464183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[660143257]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.798) (total time: 30000ms):
	Trace[660143257]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:42.799)
	Trace[660143257]: [30.000893558s] [30.000893558s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[380072670]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30007ms):
	Trace[380072670]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.797)
	Trace[380072670]: [30.007427279s] [30.007427279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [931a9cafdfaf] <==
	[INFO] 10.244.2.2:47871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092349s
	[INFO] 10.244.2.2:36751 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154655s
	[INFO] 10.244.2.2:35765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113227s
	[INFO] 10.244.2.2:34953 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189846s
	[INFO] 10.244.1.2:37377 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000779385s
	[INFO] 10.244.1.2:36374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000523293s
	[INFO] 10.244.1.2:47415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043613s
	[INFO] 10.244.0.4:56645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006213s
	[INFO] 10.244.0.4:51009 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096214s
	[INFO] 10.244.0.4:41355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183012s
	[INFO] 10.244.2.2:50655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138209s
	[INFO] 10.244.1.2:38832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167262s
	[INFO] 10.244.0.4:46148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117645s
	[INFO] 10.244.0.4:43019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107376s
	[INFO] 10.244.0.4:57161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028792s
	[INFO] 10.244.0.4:42860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034502s
	[INFO] 10.244.2.2:36830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089883s
	[INFO] 10.244.2.2:47924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141909s
	[INFO] 10.244.2.2:47506 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000097095s
	[INFO] 10.244.1.2:49209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011143s
	[INFO] 10.244.1.2:36137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100085s
	[INFO] 10.244.1.2:47199 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000096821s
	[INFO] 10.244.0.4:43720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040385s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d15c1bf38706] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54176 - 21158 "HINFO IN 3457232632200313932.3905864345721771129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010437248s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1587501409]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.793) (total time: 30005ms):
	Trace[1587501409]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.798)
	Trace[1587501409]: [30.005577706s] [30.005577706s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[680749614]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30005ms):
	Trace[680749614]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:06:42.798)
	Trace[680749614]: [30.005762488s] [30.005762488s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1474873071]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.799) (total time: 30001ms):
	Trace[1474873071]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:42.800)
	Trace[1474873071]: [30.001544995s] [30.001544995s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-343000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_55_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:55:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:08:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.24
	  Hostname:    ha-343000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6523db55e885482e8ac62c2082b7e4e8
	  System UUID:                36fe47a6-0000-0000-a226-e026237c9096
	  Boot ID:                    a6ec27d4-119e-4645-b472-4cbf4d3b3af4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6w7h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-99jtt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-q4rhs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-343000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-tj4jx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-343000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-343000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-x6pfk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-343000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-343000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     13m                    kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    13m                    kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  13m                    kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  NodeReady                13m                    kubelet          Node ha-343000 status is now: NodeReady
	  Normal  RegisteredNode           12m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           9m14s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  Starting                 3m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m23s (x8 over 3m23s)  kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x8 over 3m23s)  kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x7 over 3m23s)  kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           2m30s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	
	
	Name:               ha-343000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:08:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.25
	  Hostname:    ha-343000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 01c58e04d4304f6f9c11ce89f0bbf71d
	  System UUID:                2c7446f3-0000-0000-9664-55c72aec5dea
	  Boot ID:                    d9c8abd7-e4ec-46d0-892f-bd1bfa22eaef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jk74s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-343000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-5rtpx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-343000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-343000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-zjx8z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-343000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-343000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m33s                kube-proxy       
	  Normal   Starting                 9m17s                kube-proxy       
	  Normal   Starting                 12m                  kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           12m                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   Starting                 9m21s                kubelet          Starting kubelet.
	  Warning  Rebooted                 9m21s                kubelet          Node ha-343000-m02 has been rebooted, boot id: 9a70d273-2199-426f-b35f-a9b4075cc0d7
	  Normal   NodeHasSufficientPID     9m21s                kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m21s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m21s                kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m21s                kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m14s                node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   Starting                 3m3s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m3s (x8 over 3m3s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m3s (x8 over 3m3s)  kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m3s (x7 over 3m3s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m51s                node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           2m30s                node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           2m6s                 node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	
	
	Name:               ha-343000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_57_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:57:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:08:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.26
	  Hostname:    ha-343000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da881992752a4b679c6a5b2a9f0cdfbb
	  System UUID:                5abf4f35-0000-0000-b6fc-c88bfc629e81
	  Boot ID:                    1683487f-47c5-465d-9b2b-74dea29e28d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2kj2b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-343000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-ksnvp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-343000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-343000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-r285j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-343000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-343000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m9s               kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           9m14s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           2m51s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           2m30s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   Starting                 2m13s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m13s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m13s              kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s              kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s              kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m13s              kubelet          Node ha-343000-m03 has been rebooted, boot id: 1683487f-47c5-465d-9b2b-74dea29e28d4
	  Normal   RegisteredNode           2m6s               node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	
	
	Name:               ha-343000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_58_13_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:58:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:59:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.27
	  Hostname:    ha-343000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 25099ec69db34e82bcd2f07d22b80010
	  System UUID:                0c454e5f-0000-0000-8b6f-82e9c2aa82c5
	  Boot ID:                    b76c6143-1924-46d7-b754-0208a6d7ff29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rf4h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8hww6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-343000-m04 status is now: NodeHasNoDiskPressure
	  Normal  CIDRAssignmentFailed     10m                cidrAllocator    Node ha-343000-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-343000-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m14s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           2m51s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           2m30s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeNotReady             2m11s              node-controller  Node ha-343000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           2m6s               node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036474] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008025] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.716498] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006721] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.833567] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.343017] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.247177] systemd-fstab-generator[471]: Ignoring "noauto" option for root device
	[  +0.103204] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +1.994098] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +0.255819] systemd-fstab-generator[1114]: Ignoring "noauto" option for root device
	[  +0.098656] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.058515] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.064719] systemd-fstab-generator[1140]: Ignoring "noauto" option for root device
	[  +2.463494] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.126800] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.101663] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.133711] systemd-fstab-generator[1394]: Ignoring "noauto" option for root device
	[  +0.457617] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[  +6.844240] kauditd_printk_skb: 190 callbacks suppressed
	[ +21.300680] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 6 19:06] kauditd_printk_skb: 83 callbacks suppressed
	
	
	==> etcd [11af4dafae64] <==
	{"level":"warn","ts":"2024-09-06T19:04:56.004501Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:04:56.510489Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:04:56.955363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982261Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000937137s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-06T19:04:56.982656Z","caller":"traceutil/trace.go:171","msg":"trace[219101750] range","detail":"{range_begin:; range_end:; }","duration":"7.001140659s","start":"2024-09-06T19:04:49.981500Z","end":"2024-09-06T19:04:56.982641Z","steps":["trace[219101750] 'agreement among raft nodes before linearized reading'  (duration: 7.000934405s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:04:56.982940Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:04:58.256456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839480Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839529Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842271Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"info","ts":"2024-09-06T19:04:59.555087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	
	
	==> etcd [8bdc400b3db6] <==
	{"level":"warn","ts":"2024-09-06T19:05:52.476788Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:05:52.476858Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:05:57.477883Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:05:57.477875Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:06:02.479112Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:02.479232Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:07.479419Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:07.479730Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:12.480370Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:06:12.480493Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:06:17.480683Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:17.480759Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:22.480793Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:22.480993Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:06:27.481577Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:06:27.481605Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-06T19:06:32.376901Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.376952Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.377170Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.447537Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"6a6e0aa498652645","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-06T19:06:32.447583Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.448798Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"6a6e0aa498652645","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:06:32.448838Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	
	
	==> kernel <==
	 19:08:44 up 3 min,  0 users,  load average: 0.29, 0.24, 0.10
	Linux ha-343000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9e6763d81a89] <==
	I0906 18:59:27.723199       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727295       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:37.727338       1 main.go:299] handling current node
	I0906 18:59:37.727349       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:37.727353       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:37.727428       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:37.727453       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727489       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:37.727513       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:47.728363       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:47.728518       1 main.go:299] handling current node
	I0906 18:59:47.728633       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:47.728739       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:47.728918       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:47.728997       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:47.729121       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:47.729229       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:57.722632       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:57.722671       1 main.go:299] handling current node
	I0906 18:59:57.722682       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:57.722686       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:57.722937       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:57.722967       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:57.723092       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:57.723199       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c86abdd0a1a3] <==
	I0906 19:08:13.503798       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:08:23.506087       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:08:23.506252       1 main.go:299] handling current node
	I0906 19:08:23.506301       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:08:23.506464       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:08:23.506745       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:08:23.506837       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:08:23.506970       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:08:23.507014       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:08:33.504036       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:08:33.504384       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:08:33.504606       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:08:33.504701       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:08:33.504818       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:08:33.504906       1 main.go:299] handling current node
	I0906 19:08:33.504954       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:08:33.505036       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:08:43.504041       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:08:43.504195       1 main.go:299] handling current node
	I0906 19:08:43.504244       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:08:43.504268       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:08:43.504490       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:08:43.504573       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:08:43.504654       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:08:43.504717       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [592c214e97d5] <==
	I0906 19:05:27.461896       1 options.go:228] external host was not specified, using 192.169.0.24
	I0906 19:05:27.465176       1 server.go:142] Version: v1.31.0
	I0906 19:05:27.465213       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.107777       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:05:28.107810       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:05:28.107883       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:05:28.108002       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:05:28.108375       1 instance.go:232] Using reconciler: lease
	W0906 19:05:48.100071       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:05:48.101622       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0906 19:05:48.109302       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9ca63a507d33] <==
	I0906 19:06:00.319954       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0906 19:06:00.329227       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0906 19:06:00.389615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:06:00.399153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:06:00.399318       1 policy_source.go:224] refreshing policies
	I0906 19:06:00.418950       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:06:00.418975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:06:00.419196       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:06:00.421841       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:06:00.423174       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:06:00.423547       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:06:00.423580       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:06:00.423586       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:06:00.423589       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:06:00.423592       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:06:00.424202       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:06:00.424372       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:06:00.429383       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0906 19:06:00.444807       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.25]
	I0906 19:06:00.446706       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:06:00.460452       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0906 19:06:00.463465       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0906 19:06:00.488387       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:06:01.327320       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:06:01.574034       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.24 192.169.0.25]
	
	
	==> kube-controller-manager [5cc4eed8c219] <==
	I0906 19:05:28.174269       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:05:28.573887       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:05:28.573928       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.585160       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:05:28.585380       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:05:28.585888       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:05:28.586027       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:05:49.113760       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.24:8443/healthz\": dial tcp 192.169.0.24:8443: connect: connection refused"
	
	
	==> kube-controller-manager [890baa8f92fc] <==
	I0906 19:06:13.983300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="95.314567ms"
	I0906 19:06:14.017696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.303957ms"
	I0906 19:06:14.018733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.896µs"
	I0906 19:06:14.150501       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:06:14.168151       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:06:14.168284       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 19:06:30.950379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m03"
	I0906 19:06:31.854707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.582603ms"
	I0906 19:06:31.855323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.326µs"
	I0906 19:06:32.910883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:32.936381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:33.628526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:34.203998       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.91662ms"
	I0906 19:06:34.204272       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.095µs"
	I0906 19:06:37.967819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:37.977034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:38.064780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:06:51.654459       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="49.601603ms"
	E0906 19:06:51.654893       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-6f6b679f8f\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-6f6b679f8f\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0906 19:06:51.655193       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-l2ztt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-l2ztt\": the object has been modified; please apply your changes to the latest version and try again"
	I0906 19:06:51.655819       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f3f0b4c9-9efd-41cc-93f8-915e2a024362", APIVersion:"v1", ResourceVersion:"297", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-l2ztt EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-l2ztt": the object has been modified; please apply your changes to the latest version and try again
	I0906 19:06:51.657515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="153.079µs"
	I0906 19:06:51.663353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="119.395µs"
	I0906 19:06:51.700669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="26.423367ms"
	I0906 19:06:51.700851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.547µs"
	
	
	==> kube-proxy [803c4f073a4f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:06:13.148913       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:06:13.172780       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 19:06:13.173030       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:06:13.214090       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:06:13.214133       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:06:13.214154       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:06:13.217530       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:06:13.218331       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:06:13.218361       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:06:13.222797       1 config.go:197] "Starting service config controller"
	I0906 19:06:13.222930       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:06:13.223035       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:06:13.223104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:06:13.225748       1 config.go:326] "Starting node config controller"
	I0906 19:06:13.225874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:06:13.323124       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:06:13.324280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:06:13.326187       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9ab0b6ac90ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:55:13.194683       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:55:13.204778       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 18:55:13.204815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:55:13.260675       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:55:13.260697       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:55:13.260715       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:55:13.267079       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:55:13.267303       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:55:13.267312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:55:13.269494       1 config.go:197] "Starting service config controller"
	I0906 18:55:13.269521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:55:13.269531       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:55:13.269534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:55:13.269766       1 config.go:326] "Starting node config controller"
	I0906 18:55:13.269792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:55:13.371232       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:55:13.371252       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:55:13.371258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4d2f47c39f16] <==
	W0906 19:05:56.245160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.245533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:56.734981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.735302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.169.0.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:56.742962       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.24:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.743085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.169.0.24:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:56.935930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:56.936032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.301942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.301991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.329279       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.329316       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.169.0.24:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.449839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.449963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:57.924069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:57.924282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.279429       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.279584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.391628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.391680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.574460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.574508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.613456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.613730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	I0906 19:06:06.337934       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9b99b2f8d6ed] <==
	W0906 19:04:31.417232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.417325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:31.755428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.755742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:35.986154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:35.986279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.066579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.066654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.563029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.563228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.748870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.749078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:45.521553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:45.521675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:47.041120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:47.041443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:52.540182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:52.540432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:54.069445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:54.069585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	E0906 19:04:59.711524       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0906 19:04:59.712006       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0906 19:04:59.712120       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 19:04:59.712142       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0906 19:04:59.712922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.397388    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e798ad091c8dbac0977ce3e9539e6296e56adde9095535a7a2e9c7ea74d7777"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.403625    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7ad89fb08b292cfac509e0c383de126da238700a4e5bad8ad55590054381dba"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.416921    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2638e452207351ccfc0f2f01134ae1987de0b7fc1a7d33f66d0fef46e08a1e1"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.548822    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2c6d9f178680726c8e102ac2fb994d4e293ad44539b880ca82a1019b4cbf99a"
	Sep 06 19:06:12 ha-343000 kubelet[1561]: I0906 19:06:12.830725    1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e01343203b7a509a71640de600f467038bad7b3d1d628993d32a37ee491ef5d1"
	Sep 06 19:06:20 ha-343000 kubelet[1561]: E0906 19:06:20.331039    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:06:20 ha-343000 kubelet[1561]: I0906 19:06:20.393885    1561 scope.go:117] "RemoveContainer" containerID="b3713b7090d8f8af511e66546413a97f331dea488be8efe378a26980838f7cf4"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211095    1561 scope.go:117] "RemoveContainer" containerID="051e748db656a81282f4811bb15ed42555514a115306dfa611e2c0d2af72e345"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211309    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: E0906 19:06:43.211390    1561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9815f44c-20e3-4243-8eb4-60cd42a850ad)\"" pod="kube-system/storage-provisioner" podUID="9815f44c-20e3-4243-8eb4-60cd42a850ad"
	Sep 06 19:06:57 ha-343000 kubelet[1561]: I0906 19:06:57.289715    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:07:20 ha-343000 kubelet[1561]: E0906 19:07:20.331091    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:08:20 ha-343000 kubelet[1561]: E0906 19:08:20.333049    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-343000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.33s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (83.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-343000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-343000 --control-plane -v=7 --alsologtostderr: (1m18.953417478s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr: exit status 2 (586.630057ms)

                                                
                                                
-- stdout --
	ha-343000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-343000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-343000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-343000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	
	ha-343000-m05
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:10:04.723438   12385 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:10:04.723922   12385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:10:04.723930   12385 out.go:358] Setting ErrFile to fd 2...
	I0906 12:10:04.723940   12385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:10:04.724118   12385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:10:04.724293   12385 out.go:352] Setting JSON to false
	I0906 12:10:04.724316   12385 mustload.go:65] Loading cluster: ha-343000
	I0906 12:10:04.724354   12385 notify.go:220] Checking for updates...
	I0906 12:10:04.724635   12385 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:10:04.724650   12385 status.go:255] checking status of ha-343000 ...
	I0906 12:10:04.725055   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.725133   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.734282   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56535
	I0906 12:10:04.734692   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.735127   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.735139   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.735388   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.735512   12385 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:10:04.735613   12385 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:10:04.735712   12385 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:10:04.736709   12385 status.go:330] ha-343000 host status = "Running" (err=<nil>)
	I0906 12:10:04.736729   12385 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:10:04.736997   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.737018   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.746545   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56537
	I0906 12:10:04.746898   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.747224   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.747244   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.747467   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.747586   12385 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:10:04.747668   12385 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:10:04.747948   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.747974   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.757110   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56539
	I0906 12:10:04.757459   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.757787   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.757800   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.758008   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.758133   12385 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:10:04.758292   12385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:10:04.758313   12385 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:10:04.758407   12385 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:10:04.758500   12385 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:10:04.758614   12385 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:10:04.758706   12385 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:10:04.795670   12385 ssh_runner.go:195] Run: systemctl --version
	I0906 12:10:04.800538   12385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:10:04.811986   12385 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 12:10:04.812010   12385 api_server.go:166] Checking apiserver status ...
	I0906 12:10:04.812049   12385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:10:04.824811   12385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2311/cgroup
	W0906 12:10:04.833419   12385 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2311/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:10:04.833507   12385 ssh_runner.go:195] Run: ls
	I0906 12:10:04.837011   12385 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0906 12:10:04.840090   12385 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0906 12:10:04.840102   12385 status.go:422] ha-343000 apiserver status = Running (err=<nil>)
	I0906 12:10:04.840110   12385 status.go:257] ha-343000 status: &{Name:ha-343000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:10:04.840122   12385 status.go:255] checking status of ha-343000-m02 ...
	I0906 12:10:04.840366   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.840385   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.849136   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56543
	I0906 12:10:04.849464   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.849789   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.849800   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.849997   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.850112   12385 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:10:04.850188   12385 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:10:04.850259   12385 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12276
	I0906 12:10:04.851269   12385 status.go:330] ha-343000-m02 host status = "Running" (err=<nil>)
	I0906 12:10:04.851279   12385 host.go:66] Checking if "ha-343000-m02" exists ...
	I0906 12:10:04.851514   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.851534   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.860312   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56545
	I0906 12:10:04.860664   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.861003   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.861015   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.861258   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.861367   12385 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:10:04.861481   12385 host.go:66] Checking if "ha-343000-m02" exists ...
	I0906 12:10:04.861729   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.861754   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.870601   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56547
	I0906 12:10:04.870947   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.871306   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.871318   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.871517   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.871634   12385 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:10:04.871755   12385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:10:04.871766   12385 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:10:04.871842   12385 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:10:04.871925   12385 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:10:04.871997   12385 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:10:04.872067   12385 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:10:04.914322   12385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:10:04.927003   12385 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 12:10:04.927018   12385 api_server.go:166] Checking apiserver status ...
	I0906 12:10:04.927056   12385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:10:04.939905   12385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2021/cgroup
	W0906 12:10:04.947057   12385 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2021/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:10:04.947116   12385 ssh_runner.go:195] Run: ls
	I0906 12:10:04.950349   12385 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0906 12:10:04.953431   12385 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0906 12:10:04.953442   12385 status.go:422] ha-343000-m02 apiserver status = Running (err=<nil>)
	I0906 12:10:04.953450   12385 status.go:257] ha-343000-m02 status: &{Name:ha-343000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:10:04.953462   12385 status.go:255] checking status of ha-343000-m03 ...
	I0906 12:10:04.953716   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.953735   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.963471   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56551
	I0906 12:10:04.963807   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.964170   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.964186   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.964397   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.964500   12385 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:10:04.964580   12385 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:10:04.964652   12385 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 12285
	I0906 12:10:04.965640   12385 status.go:330] ha-343000-m03 host status = "Running" (err=<nil>)
	I0906 12:10:04.965651   12385 host.go:66] Checking if "ha-343000-m03" exists ...
	I0906 12:10:04.965921   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.965946   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.974677   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56553
	I0906 12:10:04.975034   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.975405   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.975420   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.975647   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.975765   12385 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:10:04.975850   12385 host.go:66] Checking if "ha-343000-m03" exists ...
	I0906 12:10:04.976124   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:04.976152   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:04.984780   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56555
	I0906 12:10:04.985131   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:04.985503   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:04.985520   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:04.985730   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:04.985857   12385 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:10:04.985986   12385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:10:04.985997   12385 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:10:04.986091   12385 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:10:04.986173   12385 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:10:04.986257   12385 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:10:04.986332   12385 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:10:05.027659   12385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:10:05.038043   12385 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 12:10:05.038058   12385 api_server.go:166] Checking apiserver status ...
	I0906 12:10:05.038098   12385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:10:05.049526   12385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1984/cgroup
	W0906 12:10:05.058036   12385 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1984/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:10:05.058093   12385 ssh_runner.go:195] Run: ls
	I0906 12:10:05.061862   12385 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0906 12:10:05.065000   12385 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0906 12:10:05.065013   12385 status.go:422] ha-343000-m03 apiserver status = Running (err=<nil>)
	I0906 12:10:05.065021   12385 status.go:257] ha-343000-m03 status: &{Name:ha-343000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:10:05.065031   12385 status.go:255] checking status of ha-343000-m04 ...
	I0906 12:10:05.066118   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:05.066140   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:05.074860   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56559
	I0906 12:10:05.075242   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:05.075571   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:05.075580   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:05.075787   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:05.075904   12385 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:10:05.075985   12385 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:10:05.076071   12385 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 12301
	I0906 12:10:05.077033   12385 status.go:330] ha-343000-m04 host status = "Running" (err=<nil>)
	I0906 12:10:05.077051   12385 host.go:66] Checking if "ha-343000-m04" exists ...
	I0906 12:10:05.077287   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:05.077311   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:05.085938   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56561
	I0906 12:10:05.086265   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:05.086571   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:05.086582   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:05.086776   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:05.086898   12385 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:10:05.086995   12385 host.go:66] Checking if "ha-343000-m04" exists ...
	I0906 12:10:05.087266   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:05.087290   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:05.096057   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56563
	I0906 12:10:05.096398   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:05.096761   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:05.096778   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:05.096996   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:05.097099   12385 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:10:05.097243   12385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:10:05.097255   12385 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:10:05.097342   12385 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:10:05.097425   12385 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:10:05.097519   12385 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:10:05.097605   12385 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:10:05.130917   12385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:10:05.141474   12385 status.go:257] ha-343000-m04 status: &{Name:ha-343000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:10:05.141490   12385 status.go:255] checking status of ha-343000-m05 ...
	I0906 12:10:05.142416   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:05.142440   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:05.151152   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56566
	I0906 12:10:05.151482   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:05.151795   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:05.151806   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:05.152037   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:05.152149   12385 main.go:141] libmachine: (ha-343000-m05) Calling .GetState
	I0906 12:10:05.152232   12385 main.go:141] libmachine: (ha-343000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:10:05.152305   12385 main.go:141] libmachine: (ha-343000-m05) DBG | hyperkit pid from json: 12370
	I0906 12:10:05.153283   12385 status.go:330] ha-343000-m05 host status = "Running" (err=<nil>)
	I0906 12:10:05.153292   12385 host.go:66] Checking if "ha-343000-m05" exists ...
	I0906 12:10:05.153528   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:05.153558   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:05.162185   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56568
	I0906 12:10:05.162518   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:05.162878   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:05.162894   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:05.163107   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:05.163212   12385 main.go:141] libmachine: (ha-343000-m05) Calling .GetIP
	I0906 12:10:05.163290   12385 host.go:66] Checking if "ha-343000-m05" exists ...
	I0906 12:10:05.163555   12385 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:10:05.163576   12385 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:10:05.172351   12385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56570
	I0906 12:10:05.172694   12385 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:10:05.173042   12385 main.go:141] libmachine: Using API Version  1
	I0906 12:10:05.173065   12385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:10:05.173269   12385 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:10:05.173380   12385 main.go:141] libmachine: (ha-343000-m05) Calling .DriverName
	I0906 12:10:05.173509   12385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:10:05.173520   12385 main.go:141] libmachine: (ha-343000-m05) Calling .GetSSHHostname
	I0906 12:10:05.173595   12385 main.go:141] libmachine: (ha-343000-m05) Calling .GetSSHPort
	I0906 12:10:05.173674   12385 main.go:141] libmachine: (ha-343000-m05) Calling .GetSSHKeyPath
	I0906 12:10:05.173757   12385 main.go:141] libmachine: (ha-343000-m05) Calling .GetSSHUsername
	I0906 12:10:05.173831   12385 sshutil.go:53] new ssh client: &{IP:192.169.0.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m05/id_rsa Username:docker}
	I0906 12:10:05.207398   12385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:10:05.219441   12385 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 12:10:05.219460   12385 api_server.go:166] Checking apiserver status ...
	I0906 12:10:05.219500   12385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:10:05.233168   12385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1874/cgroup
	W0906 12:10:05.240949   12385 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1874/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:10:05.241006   12385 ssh_runner.go:195] Run: ls
	I0906 12:10:05.244301   12385 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0906 12:10:05.247355   12385 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0906 12:10:05.247366   12385 status.go:422] ha-343000-m05 apiserver status = Running (err=<nil>)
	I0906 12:10:05.247374   12385 status.go:257] ha-343000-m05 status: &{Name:ha-343000-m05 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:613: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 logs -n 25: (3.309720306s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m04 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp testdata/cp-test.txt                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000 sudo cat                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m03 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-343000 node stop m02 -v=7                                                                                                 | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-343000 node start m02 -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000 -v=7                                                                                                       | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-343000 -v=7                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 12:00 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:00 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	| node    | ha-343000 node delete m03 -v=7                                                                                               | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-343000 stop -v=7                                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT | 06 Sep 24 12:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true                                                                                                     | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:05 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-343000                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:08 PDT | 06 Sep 24 12:10 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:05:01
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:05:01.821113   12253 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:05:01.821396   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821403   12253 out.go:358] Setting ErrFile to fd 2...
	I0906 12:05:01.821407   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821585   12253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:05:01.822962   12253 out.go:352] Setting JSON to false
	I0906 12:05:01.845482   12253 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11072,"bootTime":1725638429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:05:01.845567   12253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:05:01.867344   12253 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:05:01.909192   12253 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:05:01.909251   12253 notify.go:220] Checking for updates...
	I0906 12:05:01.951681   12253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:01.972896   12253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:05:01.993997   12253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:05:02.014915   12253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:05:02.036376   12253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:05:02.058842   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:02.059362   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.059426   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.069603   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56303
	I0906 12:05:02.069962   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.070394   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.070407   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.070602   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.070721   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.070905   12253 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:05:02.071152   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.071173   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.079785   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56305
	I0906 12:05:02.080100   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.080480   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.080508   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.080753   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.080876   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.109151   12253 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:05:02.151203   12253 start.go:297] selected driver: hyperkit
	I0906 12:05:02.151225   12253 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.151398   12253 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:05:02.151526   12253 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.151681   12253 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:05:02.160708   12253 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:05:02.164397   12253 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.164417   12253 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:05:02.167034   12253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:05:02.167076   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:02.167082   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:02.167157   12253 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.167283   12253 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.209167   12253 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:05:02.230210   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:02.230284   12253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:05:02.230304   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:02.230523   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:02.230539   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:02.230657   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.231246   12253 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:02.231321   12253 start.go:364] duration metric: took 58.855µs to acquireMachinesLock for "ha-343000"
	I0906 12:05:02.231338   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:02.231348   12253 fix.go:54] fixHost starting: 
	I0906 12:05:02.231579   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.231602   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.240199   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56307
	I0906 12:05:02.240538   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.240898   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.240906   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.241115   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.241241   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.241344   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:02.241429   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.241509   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:05:02.242441   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 12107 missing from process table
	I0906 12:05:02.242473   12253 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:05:02.242488   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:05:02.242570   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:02.285299   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:05:02.308252   12253 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:05:02.308536   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.308568   12253 main.go:141] libmachine: (ha-343000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:05:02.308690   12253 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:05:02.418778   12253 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:05:02.418805   12253 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:02.418989   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419036   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419095   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:02.419142   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:02.419160   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:02.420829   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Pid is 12266
	I0906 12:05:02.421178   12253 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:05:02.421194   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.421256   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:02.422249   12253 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:05:02.422316   12253 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:02.422340   12253 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66db525c}
	I0906 12:05:02.422356   12253 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:05:02.422371   12253 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:05:02.422430   12253 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:05:02.423159   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:02.423357   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.423787   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:02.423798   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.423945   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:02.424057   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:02.424240   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424491   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:02.424632   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:02.424882   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:02.424892   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:02.428574   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:02.479264   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:02.479938   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.479953   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.479971   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.479984   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.867700   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:02.867715   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:02.983045   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.983079   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.983090   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.983110   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.983957   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:02.983967   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:08.596032   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:05:08.596072   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:05:08.596081   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:05:08.620302   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:05:13.496727   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:13.496743   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.496887   12253 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:05:13.496898   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.497005   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.497091   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.497190   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497290   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497391   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.497515   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.497658   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.497666   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:05:13.573506   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:05:13.573525   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.573649   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.573744   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573841   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573933   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.574054   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.574199   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.574210   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:13.646449   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:13.646474   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:13.646492   12253 buildroot.go:174] setting up certificates
	I0906 12:05:13.646500   12253 provision.go:84] configureAuth start
	I0906 12:05:13.646506   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.646647   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:13.646742   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.646835   12253 provision.go:143] copyHostCerts
	I0906 12:05:13.646872   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.646964   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:13.646972   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.647092   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:13.647297   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647337   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:13.647342   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647419   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:13.647566   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647604   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:13.647609   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647688   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:13.647833   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:05:13.694032   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:13.694082   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:13.694097   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.694208   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.694294   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.694394   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.694509   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:13.734054   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:13.734119   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:13.754153   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:13.754219   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 12:05:13.773776   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:13.773840   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:05:13.793258   12253 provision.go:87] duration metric: took 146.744964ms to configureAuth
	I0906 12:05:13.793272   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:13.793440   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:13.793455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:13.793596   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.793699   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.793786   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793872   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793955   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.794076   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.794207   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.794215   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:13.860967   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:13.860981   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:13.861068   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:13.861082   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.861205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.861297   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861411   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861521   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.861683   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.861822   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.861868   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:13.937805   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:13.937827   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.937964   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.938080   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938295   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.938419   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.938558   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.938571   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:15.619728   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:15.619742   12253 machine.go:96] duration metric: took 13.195921245s to provisionDockerMachine
	I0906 12:05:15.619754   12253 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:05:15.619762   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:15.619772   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.619950   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:15.619966   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.620058   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.620154   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.620257   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.620337   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.660028   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:15.663309   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:15.663323   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:15.663418   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:15.663631   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:15.663638   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:15.663848   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:15.671393   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:15.691128   12253 start.go:296] duration metric: took 71.364923ms for postStartSetup
	I0906 12:05:15.691156   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.691327   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:15.691341   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.691453   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.691544   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.691628   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.691712   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.732095   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:15.732157   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:15.785220   12253 fix.go:56] duration metric: took 13.553838389s for fixHost
	I0906 12:05:15.785242   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.785373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.785462   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785558   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785650   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.785774   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:15.785926   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:15.785933   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:15.851168   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649515.950195219
	
	I0906 12:05:15.851179   12253 fix.go:216] guest clock: 1725649515.950195219
	I0906 12:05:15.851184   12253 fix.go:229] Guest: 2024-09-06 12:05:15.950195219 -0700 PDT Remote: 2024-09-06 12:05:15.785232 -0700 PDT m=+13.999000936 (delta=164.963219ms)
	I0906 12:05:15.851205   12253 fix.go:200] guest clock delta is within tolerance: 164.963219ms
	I0906 12:05:15.851209   12253 start.go:83] releasing machines lock for "ha-343000", held for 13.619855055s
	I0906 12:05:15.851228   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851359   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:15.851455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851761   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851860   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851943   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:15.851974   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852006   12253 ssh_runner.go:195] Run: cat /version.json
	I0906 12:05:15.852029   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852070   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852126   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852163   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852217   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852273   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852292   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852391   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.852414   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.945582   12253 ssh_runner.go:195] Run: systemctl --version
	I0906 12:05:15.950518   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:05:15.954710   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:15.954750   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:15.972724   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:15.972739   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:15.972842   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:15.997626   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:16.009969   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:16.021002   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.021063   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:16.029939   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.039024   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:16.047772   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.056625   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:16.065543   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:16.074247   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:16.082976   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:16.091738   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:16.099691   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:16.107701   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.207522   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
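	The containerd reconfiguration above is performed entirely with in-place `sed` edits followed by a daemon reload and restart. A minimal sketch of the key substitution (forcing the `cgroupfs` driver by rewriting `SystemdCgroup`), run against a scratch copy rather than the VM's real `/etc/containerd/config.toml`; GNU `sed`'s `-i`/`-r` flags are assumed, as on the minikube guest:

```shell
# Scratch file standing in for /etc/containerd/config.toml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution the log runs: flip SystemdCgroup to false,
# preserving the line's leading indentation via the capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"
```

On the real host this is followed by `systemctl daemon-reload` and `systemctl restart containerd`, as the log shows.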
	I0906 12:05:16.227285   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:16.227363   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:16.242536   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.255682   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:16.272770   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.283410   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.293777   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:16.316221   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.326357   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:16.341265   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:16.344224   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:16.351341   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:16.364686   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:16.462680   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:16.567102   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.567167   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:16.581141   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.682906   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:19.018795   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.33586105s)
	I0906 12:05:19.018863   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:19.029907   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:19.042839   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.053183   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:19.161103   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:19.269627   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.376110   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:19.389292   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.400498   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.508773   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:19.574293   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:19.574369   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:19.578648   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:19.578702   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:19.581725   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:19.611289   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:19.611360   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.628755   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.690349   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:19.690435   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:19.690798   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:19.695532   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:19.705484   12253 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:05:19.705569   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:19.705619   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.718680   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.718691   12253 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:05:19.718764   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.731988   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.732008   12253 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:05:19.732017   12253 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:05:19.732095   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:19.732160   12253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:05:19.769790   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:19.769810   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:19.769820   12253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:05:19.769836   12253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:05:19.769924   12253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:05:19.769938   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:19.769993   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:19.783021   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:19.783091   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:19.783139   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:19.790731   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:19.790780   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:05:19.798087   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:05:19.811294   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:19.826571   12253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:05:19.840214   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:19.853805   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:19.856803   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
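	The `/etc/hosts` updates above (for `host.minikube.internal` and `control-plane.minikube.internal`) use a filter-then-append pattern that is idempotent: any stale line for the name is dropped before the current mapping is appended, so re-running the step never duplicates an entry. A sketch against a scratch file (file names here are illustrative; the real command writes to `/etc/hosts` via `sudo cp`):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"

add_entry() {
  # Drop any existing line ending in "<tab>control-plane.minikube.internal",
  # then append the current mapping. grep -v exits 1 when nothing matches,
  # so guard with || true; write to a temp file to avoid reading and
  # writing the same file in one pipeline.
  { grep -v $'\tcontrol-plane.minikube.internal$' "$hosts" || true
    printf '192.169.0.254\tcontrol-plane.minikube.internal\n'
  } > "$hosts.new"
  mv "$hosts.new" "$hosts"
}

add_entry
add_entry   # running twice still leaves exactly one entry
grep -c 'control-plane.minikube.internal' "$hosts"   # -> 1
```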
	I0906 12:05:19.866597   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.969582   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:19.984116   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:05:19.984128   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:19.984139   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:19.984324   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:19.984402   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:19.984413   12253 certs.go:256] generating profile certs ...
	I0906 12:05:19.984529   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:19.984611   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:05:19.984683   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:19.984690   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:19.984715   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:19.984733   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:19.984750   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:19.984767   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:19.984795   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:19.984823   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:19.984846   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:19.984950   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:19.984995   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:19.985004   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:19.985045   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:19.985074   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:19.985102   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:19.985164   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:19.985201   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:19.985223   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:19.985241   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:19.985738   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:20.016977   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:20.040002   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:20.074896   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:20.096785   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:20.117992   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:20.152101   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:20.181980   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:20.249104   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:20.310747   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:20.334377   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:20.354759   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:05:20.368573   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:20.372727   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:20.381943   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385218   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385254   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.389369   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:20.398370   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:20.407468   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410735   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410769   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.414896   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:20.423953   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:20.432893   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436127   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436161   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.440280   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:05:20.449469   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:20.452834   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:20.457085   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:20.461715   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:20.466070   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:20.470282   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:20.474449   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
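	Each certificate probe above relies on `openssl x509 -checkend 86400`, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what tells minikube a cert needs regeneration. A self-contained illustration with a throwaway self-signed certificate (the key/cert paths are scratch files, not minikube's):

```shell
key=$(mktemp)
crt=$(mktemp)

# One-day self-signed cert, standing in for the profile certs being checked.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 1 -subj '/CN=scratch' 2>/dev/null

# Still valid an hour from now -> exit status 0.
openssl x509 -noout -in "$crt" -checkend 3600
```

With a window longer than the cert's remaining lifetime (e.g. `-checkend 200000` against this one-day cert), the same command exits non-zero instead.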
	I0906 12:05:20.478690   12253 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:20.478796   12253 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:05:20.491888   12253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:05:20.500336   12253 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:05:20.500348   12253 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:05:20.500388   12253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:05:20.508605   12253 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:05:20.508923   12253 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.509004   12253 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:05:20.509222   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.509871   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.510072   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:05:20.510389   12253 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:05:20.510569   12253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:05:20.518433   12253 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:05:20.518445   12253 kubeadm.go:597] duration metric: took 18.093623ms to restartPrimaryControlPlane
	I0906 12:05:20.518450   12253 kubeadm.go:394] duration metric: took 39.76917ms to StartCluster
	I0906 12:05:20.518463   12253 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.518535   12253 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.518965   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.519194   12253 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:20.519207   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:05:20.519217   12253 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:05:20.519329   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.562952   12253 out.go:177] * Enabled addons: 
	I0906 12:05:20.584902   12253 addons.go:510] duration metric: took 65.689522ms for enable addons: enabled=[]
	I0906 12:05:20.584940   12253 start.go:246] waiting for cluster config update ...
	I0906 12:05:20.584973   12253 start.go:255] writing updated cluster config ...
	I0906 12:05:20.608171   12253 out.go:201] 
	I0906 12:05:20.630349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.630488   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.652951   12253 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:05:20.695164   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:20.695203   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:20.695405   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:20.695421   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:20.695517   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.696367   12253 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:20.696454   12253 start.go:364] duration metric: took 67.794µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:05:20.696472   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:20.696479   12253 fix.go:54] fixHost starting: m02
	I0906 12:05:20.696771   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:20.696805   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:20.705845   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56329
	I0906 12:05:20.706183   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:20.706528   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:20.706543   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:20.706761   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:20.706875   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.706980   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:05:20.707064   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.707136   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:05:20.708055   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.708088   12253 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:05:20.708098   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:05:20.708185   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:20.734735   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:05:20.776747   12253 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:05:20.777073   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.777115   12253 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:05:20.778701   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.778717   12253 main.go:141] libmachine: (ha-343000-m02) DBG | pid 12118 is in state "Stopped"
	I0906 12:05:20.778778   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:05:20.779095   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:05:20.806950   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:05:20.806972   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:20.807155   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807233   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807304   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:20.807361   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:20.807374   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:20.808851   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Pid is 12276
	I0906 12:05:20.809435   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:05:20.809451   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.809514   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12276
	I0906 12:05:20.811081   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:05:20.811162   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:20.811181   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:05:20.811209   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca2f2}
	I0906 12:05:20.811220   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:05:20.811238   12253 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:05:20.811245   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:05:20.811904   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:20.812111   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.812569   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:20.812582   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.812711   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:20.812849   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:20.812941   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813131   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:20.813262   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:20.813401   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:20.813411   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:20.817160   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:20.825311   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:20.826263   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:20.826278   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:20.826305   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:20.826316   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.214947   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:21.214961   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:21.329668   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:21.329695   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:21.329711   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:21.329721   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.330549   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:21.330560   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:26.960134   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:05:26.960175   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:05:26.960183   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:05:26.984271   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:05:30.128139   12253 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.25:22: connect: connection refused
	I0906 12:05:33.191918   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:33.191932   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192104   12253 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:05:33.192113   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192203   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.192293   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.192374   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192456   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192573   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.192685   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.192834   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.192848   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:05:33.271080   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:05:33.271107   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.271242   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.271343   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271432   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271517   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.271653   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.271816   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.271828   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:33.340749   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:33.340766   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:33.340776   12253 buildroot.go:174] setting up certificates
	I0906 12:05:33.340781   12253 provision.go:84] configureAuth start
	I0906 12:05:33.340788   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.340917   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:33.341015   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.341102   12253 provision.go:143] copyHostCerts
	I0906 12:05:33.341127   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341183   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:33.341189   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341303   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:33.341481   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341516   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:33.341521   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341626   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:33.341793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341824   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:33.341829   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341902   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:33.342105   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:05:33.430053   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:33.430099   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:33.430112   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.430247   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.430337   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.430424   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.430498   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:33.468786   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:33.468854   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:33.488429   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:33.488502   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:05:33.507788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:33.507853   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:05:33.527149   12253 provision.go:87] duration metric: took 186.359429ms to configureAuth
	I0906 12:05:33.527164   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:33.527349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:33.527363   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:33.527493   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.527581   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.527670   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527752   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527834   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.527941   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.528081   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.528089   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:33.592983   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:33.592995   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:33.593066   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:33.593077   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.593197   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.593303   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593392   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593487   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.593630   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.593775   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.593821   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:33.669226   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:33.669253   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.669404   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.669513   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669628   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669726   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.669876   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.670026   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.670038   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:35.327313   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:35.327328   12253 machine.go:96] duration metric: took 14.51472045s to provisionDockerMachine
	I0906 12:05:35.327335   12253 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:05:35.327345   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:35.327357   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.327550   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:35.327564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.327658   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.327737   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.327824   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.327895   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.374953   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:35.380104   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:35.380118   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:35.380209   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:35.380346   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:35.380353   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:35.380535   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:35.392904   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:35.425316   12253 start.go:296] duration metric: took 97.970334ms for postStartSetup
	I0906 12:05:35.425336   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.425510   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:35.425521   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.425611   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.425700   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.425784   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.425866   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.465210   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:35.465270   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:35.519276   12253 fix.go:56] duration metric: took 14.822763667s for fixHost
	I0906 12:05:35.519322   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.519466   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.519564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519682   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519766   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.519897   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:35.520049   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:35.520058   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:35.586671   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649535.517793561
	
	I0906 12:05:35.586682   12253 fix.go:216] guest clock: 1725649535.517793561
	I0906 12:05:35.586690   12253 fix.go:229] Guest: 2024-09-06 12:05:35.517793561 -0700 PDT Remote: 2024-09-06 12:05:35.519294 -0700 PDT m=+33.733024449 (delta=-1.500439ms)
	I0906 12:05:35.586700   12253 fix.go:200] guest clock delta is within tolerance: -1.500439ms
	I0906 12:05:35.586703   12253 start.go:83] releasing machines lock for "ha-343000-m02", held for 14.890212868s
	I0906 12:05:35.586719   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.586869   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:35.609959   12253 out.go:177] * Found network options:
	I0906 12:05:35.631361   12253 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:05:35.652026   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.652053   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652675   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652820   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652904   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:35.652927   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:05:35.652986   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.653055   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:05:35.653068   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.653078   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653249   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653283   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653371   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653405   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653519   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653550   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.653617   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:05:35.689663   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:35.689725   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:35.741169   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:35.741183   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.741249   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:35.756280   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:35.765285   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:35.774250   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:35.774298   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:35.783141   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.792103   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:35.800998   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.809931   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:35.818930   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:35.828100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:35.837011   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:35.846071   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:35.854051   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:35.862225   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:35.953449   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:35.973036   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.973102   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:35.989701   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.002119   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:36.020969   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.032323   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.043370   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:36.064919   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.076134   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:36.091185   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:36.094041   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:36.101975   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:36.115524   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:36.210477   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:36.307446   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:36.307474   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:36.321506   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:36.425142   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:38.743512   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31834803s)
	I0906 12:05:38.743573   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:38.754689   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:38.767595   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:38.778550   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:38.871803   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:38.967444   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.077912   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:39.091499   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:39.102647   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.199868   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:39.269396   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:39.269473   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:39.274126   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:39.274176   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:39.279526   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:39.307628   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:39.307702   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.324272   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.363496   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:39.384323   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:05:39.405031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:39.405472   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:39.410152   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:39.420507   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:05:39.420684   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:39.420907   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.420932   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.430101   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56352
	I0906 12:05:39.430438   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.430796   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.430812   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.431028   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.431139   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:39.431212   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:39.431285   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:39.432244   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:05:39.432496   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.432518   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.441251   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56354
	I0906 12:05:39.441578   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.441903   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.441918   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.442138   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.442248   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:39.442348   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.25
	I0906 12:05:39.442355   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:39.442365   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:39.442516   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:39.442578   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:39.442588   12253 certs.go:256] generating profile certs ...
	I0906 12:05:39.442681   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:39.442772   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.7390dc12
	I0906 12:05:39.442830   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:39.442838   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:39.442859   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:39.442879   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:39.442896   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:39.442915   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:39.442951   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:39.442970   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:39.442987   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:39.443067   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:39.443106   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:39.443114   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:39.443147   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:39.443183   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:39.443212   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:39.443276   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:39.443310   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.443336   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.443355   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.443381   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:39.443473   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:39.443566   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:39.443662   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:39.443742   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:39.474601   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:05:39.477773   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:05:39.486087   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:05:39.489291   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:05:39.497797   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:05:39.500976   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:05:39.508902   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:05:39.512097   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:05:39.522208   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:05:39.529029   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:05:39.538558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:05:39.541788   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:05:39.551255   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:39.571163   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:39.590818   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:39.610099   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:39.629618   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:39.649203   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:39.668940   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:39.688319   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:39.707568   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:39.727593   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:39.746946   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:39.766191   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:05:39.779761   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:05:39.793389   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:05:39.807028   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:05:39.820798   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:05:39.834428   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:05:39.848169   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:05:39.861939   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:39.866268   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:39.875520   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878895   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878936   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.883242   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:39.892394   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:39.901475   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904880   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904919   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.909164   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:39.918366   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:39.927561   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.930968   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.931005   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.935325   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
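	(The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its OpenSSL subject-hash name, which is how OpenSSL locates a CA in its cert directory. A minimal standalone sketch of the same scheme, using throwaway paths under /tmp rather than the log's /etc/ssl/certs:)

```shell
# Generate a throwaway self-signed CA (illustrative, not a minikube cert).
set -e
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem \
  -days 1 -nodes -subj "/CN=demoCA" 2>/dev/null
# Compute the subject hash OpenSSL uses for lookup in a cert directory.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
# Same shape as the log's: ln -fs .../minikubeCA.pem /etc/ssl/certs/b5213941.0
ln -fs /tmp/demo-ca.pem "/tmp/$hash.0"
openssl x509 -subject -noout -in "/tmp/$hash.0"
```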
	I0906 12:05:39.944442   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:39.947919   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:39.952225   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:39.956510   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:39.960794   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:39.965188   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:39.969546   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
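	(The run of `-checkend 86400` calls above asserts that each cluster certificate remains valid for at least 24 hours, i.e. 86400 seconds. A standalone sketch against a throwaway cert, since the log's cert paths live inside the VM:)

```shell
# Create a demo cert valid for 10 days (illustrative only).
set -e
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 10 -nodes -subj "/CN=demo" 2>/dev/null
# -checkend N exits 0 if the cert will NOT expire within N seconds,
# non-zero if it will (which is when minikube would regenerate it).
if openssl x509 -noout -in /tmp/demo.crt -checkend 86400 >/dev/null; then
  echo "valid for at least 24h"
else
  echo "expires within 24h"
fi
```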
	I0906 12:05:39.973805   12253 kubeadm.go:934] updating node {m02 192.169.0.25 8443 v1.31.0 docker true true} ...
	I0906 12:05:39.973869   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:39.973885   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:39.973920   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:39.987092   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:39.987133   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:39.987182   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:39.995535   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:39.995584   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:05:40.003762   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:05:40.017266   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:40.030719   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:40.044348   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:40.047310   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
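	(The /etc/hosts rewrite above is an idempotent upsert: strip any existing `control-plane.minikube.internal` line, append the current VIP mapping, then swap the result into place. The same technique against a throwaway copy, so it does not touch the real /etc/hosts; the 192.169.0.9 stale entry is an illustrative stand-in:)

```shell
set -e
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n192.169.0.9\tcontrol-plane.minikube.internal\n' > "$HOSTS"
# Drop any stale mapping, append the VIP from the log, write atomically.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '192.169.0.254\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```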
	I0906 12:05:40.057546   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.156340   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.171403   12253 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:40.171578   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:40.192574   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:05:40.213457   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.344499   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.359579   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:40.359776   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:05:40.359813   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:05:40.359973   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:05:40.360058   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:40.360063   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:40.360071   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:40.360075   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:47.989850   12253 round_trippers.go:574] Response Status:  in 7629 milliseconds
	I0906 12:05:48.990862   12253 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990895   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:48.990902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:48.990922   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:49.992764   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:49.992860   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:56357->192.169.0.24:8443: read: connection reset by peer
	I0906 12:05:49.992914   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:49.992923   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:49.992931   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:49.992938   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:50.992884   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:50.992985   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:50.992993   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:50.993001   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:50.993007   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:51.994156   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:51.994218   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:51.994272   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:51.994282   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:51.994293   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:51.994300   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:52.994610   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:52.994678   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:52.994684   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:52.994690   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:52.994695   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:53.996452   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:53.996513   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:53.996568   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:53.996577   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:53.996587   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:53.996600   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:54.996281   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:54.996431   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:54.996445   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:54.996456   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:54.996470   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997732   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:55.997791   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:55.997834   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:55.997841   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:55.997848   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997855   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998659   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:56.998737   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:56.998743   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:56.998748   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998753   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:57.998704   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:57.998768   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:57.998824   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:57.998830   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:57.998841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:57.998847   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.234879   12253 round_trippers.go:574] Response Status: 200 OK in 2236 milliseconds
	I0906 12:06:00.235584   12253 node_ready.go:49] node "ha-343000-m02" has status "Ready":"True"
	I0906 12:06:00.235597   12253 node_ready.go:38] duration metric: took 19.875567395s for node "ha-343000-m02" to be "Ready" ...
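	(The node_ready wait above is a simple poll loop: retry the GET roughly once per second, tolerating connection-refused and empty responses, until a 200 OK arrives or the 6m0s deadline passes. A minimal sketch of the same pattern; `wait_ready` and the file:// URL are illustrative, not minikube code:)

```shell
# Poll a URL until it answers or the deadline passes, one attempt per second.
wait_ready() {
  url=$1; deadline=$((SECONDS + 15))
  while [ "$SECONDS" -lt "$deadline" ]; do
    # -f: treat HTTP errors as failure; connection refused also fails here.
    if curl -fsS -o /dev/null "$url" 2>/dev/null; then
      echo ready; return 0
    fi
    sleep 1
  done
  echo timeout; return 1
}
wait_ready "file:///etc/hosts"
```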
	I0906 12:06:00.235604   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:00.235643   12253 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:06:00.235653   12253 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:06:00.235696   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:00.235701   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.235707   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.235711   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.262088   12253 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0906 12:06:00.268356   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.268408   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:00.268414   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.268421   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.268427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.271139   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.271625   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.271633   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.271638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.271642   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.273753   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.274136   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.274144   12253 pod_ready.go:82] duration metric: took 5.774893ms for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274150   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274179   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:00.274184   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.274189   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.274192   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.275924   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.276344   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.276351   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.276355   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.276360   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278001   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.278322   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.278329   12253 pod_ready.go:82] duration metric: took 4.174121ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278335   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278363   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:00.278368   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.278373   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278379   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.280145   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.280523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.280530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.280535   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.280540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.282107   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.282477   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.282486   12253 pod_ready.go:82] duration metric: took 4.146745ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282492   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282522   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.282528   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.282534   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.282537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.284223   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.284663   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.284670   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.284676   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.284679   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.286441   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.782726   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.782751   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.782796   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.782807   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.786175   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:00.786692   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.786700   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.786706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.786710   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.788874   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.283655   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.283671   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.283678   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.283683   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.285985   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.286465   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.286473   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.286481   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.286485   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.288565   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.782633   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.782651   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.782659   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.782664   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.785843   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:01.786296   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.786304   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.786309   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.786314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.788345   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.788771   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.788779   12253 pod_ready.go:82] duration metric: took 1.506279407s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788786   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788823   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:01.788828   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.788833   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.788838   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.790798   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:01.791160   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:01.791171   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.791184   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.791187   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.793250   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.793611   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.793620   12253 pod_ready.go:82] duration metric: took 4.828788ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.793631   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.837481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:01.837495   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.837504   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.837509   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.840718   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.037469   12253 request.go:632] Waited for 196.356353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037506   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037512   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.037520   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.037525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.040221   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.040550   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:02.040560   12253 pod_ready.go:82] duration metric: took 246.922589ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.040567   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.237374   12253 request.go:632] Waited for 196.770161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237419   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.237436   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.237442   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.240098   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.437383   12253 request.go:632] Waited for 196.723319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437429   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.437443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.437449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.440277   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.636447   12253 request.go:632] Waited for 94.227022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636509   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636516   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.636524   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.636528   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.640095   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.837639   12253 request.go:632] Waited for 197.104367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837707   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837717   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.837763   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.837788   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.841651   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:03.040768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.040781   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.040789   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.040793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.043403   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:03.236506   12253 request.go:632] Waited for 192.559607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236606   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236618   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.236631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.236637   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.240751   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.540928   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.540954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.540973   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.540980   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.545016   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.637802   12253 request.go:632] Waited for 92.404425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637881   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637890   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.637902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.637910   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.642163   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.041768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.041794   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.041804   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.041813   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.046193   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.047251   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.047260   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.047266   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.047277   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.056137   12253 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0906 12:06:04.056428   12253 pod_ready.go:103] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:04.541406   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.541425   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.541434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.541439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.544224   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:04.544684   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.544691   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.544697   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.544707   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.547090   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.040907   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:05.040922   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.040930   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.040934   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.044733   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.045134   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:05.045143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.045149   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.045152   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.047168   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.047571   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.047581   12253 pod_ready.go:82] duration metric: took 3.007003521s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047587   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047621   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:05.047626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.047631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.047636   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.049432   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:05.236368   12253 request.go:632] Waited for 186.419986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236497   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236514   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.236525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.236532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.239828   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.240204   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.240214   12253 pod_ready.go:82] duration metric: took 192.620801ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.240220   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.435846   12253 request.go:632] Waited for 195.558833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435906   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.435914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.435921   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.438946   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.636650   12253 request.go:632] Waited for 197.107158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636711   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636719   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.636728   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.636733   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.639926   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.640212   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.640221   12253 pod_ready.go:82] duration metric: took 399.995302ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.640232   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.837401   12253 request.go:632] Waited for 197.103806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837486   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.837513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.837523   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.840662   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.035821   12253 request.go:632] Waited for 194.603254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035950   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.035962   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.035968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.039252   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.039561   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.039571   12253 pod_ready.go:82] duration metric: took 399.332528ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.039578   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.236804   12253 request.go:632] Waited for 197.127943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236841   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236849   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.236856   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.236861   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.239571   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.435983   12253 request.go:632] Waited for 195.836904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436083   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436095   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.436107   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.436115   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.440028   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.440297   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.440306   12253 pod_ready.go:82] duration metric: took 400.722778ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.440313   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.635911   12253 request.go:632] Waited for 195.558637ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635989   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635997   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.636005   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.636009   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.638766   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.836563   12253 request.go:632] Waited for 197.42239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836630   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836640   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.836651   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.836656   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.840182   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.840437   12253 pod_ready.go:93] pod "kube-proxy-8hww6" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.840446   12253 pod_ready.go:82] duration metric: took 400.127213ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.840453   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.036000   12253 request.go:632] Waited for 195.50345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036078   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.036093   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.036101   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.039960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.237550   12253 request.go:632] Waited for 197.186932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237618   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237627   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.237638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.237645   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.241824   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:07.242186   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.242196   12253 pod_ready.go:82] duration metric: took 401.736827ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.242202   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.437080   12253 request.go:632] Waited for 194.824311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437120   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437127   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.437134   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.437177   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.439746   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:07.636668   12253 request.go:632] Waited for 196.435868ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636764   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636773   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.636784   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.636790   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.640555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.640971   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.640979   12253 pod_ready.go:82] duration metric: took 398.771488ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.640986   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.837782   12253 request.go:632] Waited for 196.72045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837885   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837895   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.837913   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.841222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.037474   12253 request.go:632] Waited for 195.707367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037543   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037551   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.037559   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.037564   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.041008   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.237863   12253 request.go:632] Waited for 96.589125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238009   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238027   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.238039   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.238064   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.241278   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.436102   12253 request.go:632] Waited for 194.439362ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436137   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.436151   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.436183   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.439043   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:08.642356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.642376   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.642388   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.642397   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.645933   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.837859   12253 request.go:632] Waited for 191.363155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837895   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837900   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.837911   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.841081   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.141167   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.141182   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.141191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.141195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.144158   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.235895   12253 request.go:632] Waited for 91.258445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235957   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.235972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.235977   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.239065   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.641494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.641508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.641517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.641521   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.644350   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.644757   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.644765   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.644771   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.644774   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.647091   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.647426   12253 pod_ready.go:103] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:10.141899   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:10.141923   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.141934   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.141941   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.145540   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.145973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.145981   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.145987   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.145989   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.148176   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.148538   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.148547   12253 pod_ready.go:82] duration metric: took 2.507551998s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.148554   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.235772   12253 request.go:632] Waited for 87.183047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235805   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235811   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.235831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.235849   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.238046   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.437551   12253 request.go:632] Waited for 199.151796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437619   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.437643   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.437648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.440639   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.440964   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.440974   12253 pod_ready.go:82] duration metric: took 292.414078ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.440981   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.636354   12253 request.go:632] Waited for 195.279783ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636426   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636437   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.636450   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.636456   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.641024   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:10.836907   12253 request.go:632] Waited for 195.513588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.836991   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.837001   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.837012   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.837020   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.840787   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.841194   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.841203   12253 pod_ready.go:82] duration metric: took 400.216153ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.841209   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.036390   12253 request.go:632] Waited for 195.137597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036488   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.036510   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.036517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.040104   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:11.236464   12253 request.go:632] Waited for 195.741522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.236507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.236513   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.244008   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:11.244389   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:11.244399   12253 pod_ready.go:82] duration metric: took 403.184015ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.244409   12253 pod_ready.go:39] duration metric: took 11.008775818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:11.244428   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:11.244490   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:11.260044   12253 api_server.go:72] duration metric: took 31.088552933s to wait for apiserver process to appear ...
	I0906 12:06:11.260057   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:11.260076   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:11.268665   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:11.268720   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:11.268725   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.268730   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.268734   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.269258   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:11.269330   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:11.269341   12253 api_server.go:131] duration metric: took 9.279203ms to wait for apiserver health ...
	I0906 12:06:11.269351   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:11.436974   12253 request.go:632] Waited for 167.586901ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437022   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437029   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.437043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.437047   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.441302   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:11.447157   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:11.447183   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447192   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447198   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.447201   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.447204   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.447208   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447211   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.447214   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.447218   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447223   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.447228   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.447232   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.447237   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.447241   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.447244   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.447247   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.447253   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.447258   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.447264   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.447268   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.447270   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.447273   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.447276   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.447294   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.447303   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.447308   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.447313   12253 system_pods.go:74] duration metric: took 177.956833ms to wait for pod list to return data ...
	I0906 12:06:11.447319   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:11.637581   12253 request.go:632] Waited for 190.208152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637651   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637657   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.637664   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.637668   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.650462   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:11.650666   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:11.650678   12253 default_sa.go:55] duration metric: took 203.353142ms for default service account to be created ...
	I0906 12:06:11.650687   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:11.837096   12253 request.go:632] Waited for 186.371823ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837128   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837134   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.837139   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.837143   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.866992   12253 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0906 12:06:11.873145   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:11.873167   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873175   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873181   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.873185   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.873188   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.873195   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873199   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.873202   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.873206   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873211   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.873215   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.873219   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.873223   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.873227   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.873231   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.873233   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.873236   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.873240   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.873244   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.873247   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.873252   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.873256   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.873259   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.873262   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.873265   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.873268   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.873274   12253 system_pods.go:126] duration metric: took 222.581886ms to wait for k8s-apps to be running ...
	I0906 12:06:11.873283   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:11.873340   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:11.886025   12253 system_svc.go:56] duration metric: took 12.733456ms WaitForService to wait for kubelet
	I0906 12:06:11.886050   12253 kubeadm.go:582] duration metric: took 31.714560483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:11.886086   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:12.036232   12253 request.go:632] Waited for 150.073414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036268   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036273   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:12.036286   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:12.036290   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:12.048789   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:12.049838   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049855   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049868   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049873   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049876   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049881   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049884   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049893   12253 node_conditions.go:105] duration metric: took 163.797553ms to run NodePressure ...
	I0906 12:06:12.049902   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:12.049922   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:12.087274   12253 out.go:201] 
	I0906 12:06:12.123635   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:12.123705   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.161370   12253 out.go:177] * Starting "ha-343000-m03" control-plane node in "ha-343000" cluster
	I0906 12:06:12.219408   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:12.219442   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:12.219591   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:12.219605   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:12.219694   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.220349   12253 start.go:360] acquireMachinesLock for ha-343000-m03: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:12.220455   12253 start.go:364] duration metric: took 68.753µs to acquireMachinesLock for "ha-343000-m03"
	I0906 12:06:12.220476   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:12.220482   12253 fix.go:54] fixHost starting: m03
	I0906 12:06:12.220813   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:12.220843   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:12.230327   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56369
	I0906 12:06:12.230794   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:12.231264   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:12.231284   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:12.231543   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:12.231691   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.231816   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:06:12.231923   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.232050   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:06:12.233006   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.233040   12253 fix.go:112] recreateIfNeeded on ha-343000-m03: state=Stopped err=<nil>
	I0906 12:06:12.233052   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	W0906 12:06:12.233162   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:12.271360   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m03" ...
	I0906 12:06:12.312281   12253 main.go:141] libmachine: (ha-343000-m03) Calling .Start
	I0906 12:06:12.312472   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.312588   12253 main.go:141] libmachine: (ha-343000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid
	I0906 12:06:12.314085   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.314111   12253 main.go:141] libmachine: (ha-343000-m03) DBG | pid 10460 is in state "Stopped"
	I0906 12:06:12.314145   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid...
	I0906 12:06:12.314314   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Using UUID 5abf6194-a669-4f35-b6fc-c88bfc629e81
	I0906 12:06:12.392247   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Generated MAC 3e:84:3d:bc:9c:31
	I0906 12:06:12.392279   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:12.392453   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392498   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392570   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5abf6194-a669-4f35-b6fc-c88bfc629e81", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:12.392621   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5abf6194-a669-4f35-b6fc-c88bfc629e81 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:12.392631   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:12.394468   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Pid is 12285
	I0906 12:06:12.395082   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Attempt 0
	I0906 12:06:12.395129   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.395296   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 12285
	I0906 12:06:12.398168   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Searching for 3e:84:3d:bc:9c:31 in /var/db/dhcpd_leases ...
	I0906 12:06:12.398286   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:12.398303   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:12.398316   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:12.398325   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:12.398339   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:06:12.398359   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found match: 3e:84:3d:bc:9c:31
	I0906 12:06:12.398382   12253 main.go:141] libmachine: (ha-343000-m03) DBG | IP: 192.169.0.26
	I0906 12:06:12.398414   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetConfigRaw
	I0906 12:06:12.399172   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:12.399462   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.400029   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:12.400042   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.400184   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:12.400344   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:12.400464   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400591   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400728   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:12.400904   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:12.401165   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:12.401176   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:12.404210   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:12.438119   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:12.439198   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.439227   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.439241   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.439256   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.845267   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:12.845282   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:12.960204   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.960224   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.960244   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.960258   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.961041   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:12.961054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:06:18.729819   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:06:18.729887   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:06:18.729898   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:06:18.753054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:06:23.465534   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:06:23.465548   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465717   12253 buildroot.go:166] provisioning hostname "ha-343000-m03"
	I0906 12:06:23.465726   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465818   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.465902   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.465981   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466055   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466146   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.466265   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.466412   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.466421   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m03 && echo "ha-343000-m03" | sudo tee /etc/hostname
	I0906 12:06:23.536843   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m03
	
	I0906 12:06:23.536860   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.536985   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.537079   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537171   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537236   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.537354   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.537507   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.537525   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:06:23.606665   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:06:23.606681   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:06:23.606695   12253 buildroot.go:174] setting up certificates
	I0906 12:06:23.606700   12253 provision.go:84] configureAuth start
	I0906 12:06:23.606707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.606846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:23.606946   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.607022   12253 provision.go:143] copyHostCerts
	I0906 12:06:23.607051   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607104   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:06:23.607112   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607235   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:06:23.607441   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607476   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:06:23.607482   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607552   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:06:23.607719   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607747   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:06:23.607752   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607836   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:06:23.607981   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m03 san=[127.0.0.1 192.169.0.26 ha-343000-m03 localhost minikube]
	I0906 12:06:23.699873   12253 provision.go:177] copyRemoteCerts
	I0906 12:06:23.699921   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:06:23.699935   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.700077   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.700175   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.700270   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.700376   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:23.737703   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:06:23.737771   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:06:23.757756   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:06:23.757827   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:06:23.777598   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:06:23.777673   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:06:23.797805   12253 provision.go:87] duration metric: took 191.09552ms to configureAuth
	I0906 12:06:23.797818   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:06:23.797988   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:23.798002   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:23.798134   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.798231   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.798314   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798400   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798488   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.798597   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.798724   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.798732   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:06:23.860492   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:06:23.860504   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:06:23.860586   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:06:23.860599   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.860730   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.860807   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.860907   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.861010   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.861140   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.861285   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.861332   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:06:23.935021   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:06:23.935039   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.935186   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.935286   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935371   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935478   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.935609   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.935750   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.935762   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:06:25.580352   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:06:25.580366   12253 machine.go:96] duration metric: took 13.180301802s to provisionDockerMachine
	I0906 12:06:25.580373   12253 start.go:293] postStartSetup for "ha-343000-m03" (driver="hyperkit")
	I0906 12:06:25.580380   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:06:25.580394   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.580572   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:06:25.580585   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.580672   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.580761   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.580846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.580931   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.621691   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:06:25.626059   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:06:25.626069   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:06:25.626156   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:06:25.626292   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:06:25.626299   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:06:25.626479   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:06:25.640080   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:25.666256   12253 start.go:296] duration metric: took 85.87411ms for postStartSetup
	I0906 12:06:25.666279   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.666455   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:06:25.666469   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.666570   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.666655   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.666734   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.666815   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.704275   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:06:25.704337   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:06:25.737458   12253 fix.go:56] duration metric: took 13.516946704s for fixHost
	I0906 12:06:25.737482   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.737626   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.737732   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737832   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737920   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.738049   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:25.738192   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:25.738199   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:06:25.803149   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649585.904544960
	
	I0906 12:06:25.803162   12253 fix.go:216] guest clock: 1725649585.904544960
	I0906 12:06:25.803168   12253 fix.go:229] Guest: 2024-09-06 12:06:25.90454496 -0700 PDT Remote: 2024-09-06 12:06:25.737472 -0700 PDT m=+83.951104505 (delta=167.07296ms)
	I0906 12:06:25.803178   12253 fix.go:200] guest clock delta is within tolerance: 167.07296ms
	I0906 12:06:25.803182   12253 start.go:83] releasing machines lock for "ha-343000-m03", held for 13.582690615s
	I0906 12:06:25.803198   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.803329   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:25.825405   12253 out.go:177] * Found network options:
	I0906 12:06:25.846508   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25
	W0906 12:06:25.867569   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.867608   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.867639   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868819   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:06:25.868894   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	W0906 12:06:25.868907   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.868930   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.869032   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:06:25.869046   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.869089   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869194   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869217   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869337   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869358   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869516   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.869640   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	W0906 12:06:25.904804   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:06:25.904860   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:06:25.953607   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:06:25.953623   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:25.953707   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:25.969069   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:06:25.977320   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:06:25.985732   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:06:25.985790   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:06:25.994169   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.002564   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:06:26.011076   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.019409   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:06:26.027829   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:06:26.036100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:06:26.044789   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:06:26.053382   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:06:26.060878   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:06:26.068234   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.161656   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:06:26.180419   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:26.180540   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:06:26.197783   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.208495   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:06:26.223788   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.234758   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.245879   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:06:26.268201   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.279748   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:26.298675   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:06:26.301728   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:06:26.309959   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:06:26.323781   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:06:26.418935   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:06:26.520404   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:06:26.520429   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:06:26.534785   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.635772   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:06:28.931869   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296074778s)
	I0906 12:06:28.931929   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:06:28.943824   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:06:28.959441   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:28.970674   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:06:29.066042   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:06:29.168956   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.286202   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:06:29.299988   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:29.311495   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.429259   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:06:29.496621   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:06:29.496705   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:06:29.502320   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:06:29.502374   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:06:29.505587   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:06:29.534004   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:06:29.534083   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.551834   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.590600   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:06:29.632268   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:06:29.653333   12253 out.go:177]   - env NO_PROXY=192.169.0.24,192.169.0.25
	I0906 12:06:29.674153   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:29.674373   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:06:29.677525   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:29.687202   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:06:29.687389   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:29.687610   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.687639   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.696472   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56391
	I0906 12:06:29.696894   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.697234   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.697246   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.697502   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.697641   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:06:29.697736   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:29.697809   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:06:29.698794   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:06:29.699046   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.699070   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.707791   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56393
	I0906 12:06:29.708136   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.708457   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.708468   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.708696   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.708812   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:06:29.708911   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.26
	I0906 12:06:29.708917   12253 certs.go:194] generating shared ca certs ...
	I0906 12:06:29.708928   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:06:29.709069   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:06:29.709123   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:06:29.709132   12253 certs.go:256] generating profile certs ...
	I0906 12:06:29.709257   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:06:29.709340   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.e464bc73
	I0906 12:06:29.709394   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:06:29.709401   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:06:29.709422   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:06:29.709447   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:06:29.709465   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:06:29.709482   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:06:29.709510   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:06:29.709528   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:06:29.709550   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:06:29.709623   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:06:29.709661   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:06:29.709669   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:06:29.709702   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:06:29.709732   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:06:29.709766   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:06:29.709833   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:29.709868   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:06:29.709889   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:06:29.709908   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:29.709932   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:06:29.710030   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:06:29.710110   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:06:29.710211   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:06:29.710304   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:06:29.742607   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:06:29.746569   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:06:29.754558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:06:29.757841   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:06:29.765881   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:06:29.769140   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:06:29.778234   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:06:29.781483   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:06:29.789701   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:06:29.792877   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:06:29.801155   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:06:29.804562   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:06:29.812907   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:06:29.833527   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:06:29.854042   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:06:29.874274   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:06:29.894675   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:06:29.914759   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:06:29.935020   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:06:29.955774   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:06:29.976174   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:06:29.996348   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:06:30.016705   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:06:30.036752   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:06:30.050816   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:06:30.064469   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:06:30.078121   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:06:30.092155   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:06:30.106189   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:06:30.120313   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:06:30.134091   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:06:30.138549   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:06:30.147484   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151103   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151157   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.155470   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:06:30.164282   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:06:30.173035   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176736   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176783   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.181161   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:06:30.189862   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:06:30.198669   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202224   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202268   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.206651   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:06:30.215322   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:06:30.218903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:06:30.223374   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:06:30.227903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:06:30.232564   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:06:30.237667   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:06:30.242630   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:06:30.247576   12253 kubeadm.go:934] updating node {m03 192.169.0.26 8443 v1.31.0 docker true true} ...
	I0906 12:06:30.247652   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:06:30.247670   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:06:30.247719   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:06:30.261197   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:06:30.261239   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:06:30.261300   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:06:30.269438   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:06:30.269496   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:06:30.277362   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:06:30.291520   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:06:30.305340   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:06:30.319495   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:06:30.322637   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:30.332577   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.441240   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.456369   12253 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:06:30.456602   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:30.477910   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:06:30.498557   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.628440   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.645947   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:06:30.646165   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:06:30.646208   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:06:30.646371   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.646412   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:30.646417   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.646423   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.646427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649121   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:30.649426   12253 node_ready.go:49] node "ha-343000-m03" has status "Ready":"True"
	I0906 12:06:30.649435   12253 node_ready.go:38] duration metric: took 3.055625ms for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.649441   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:30.649480   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:30.649485   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.649491   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.655093   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:30.660461   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:30.660533   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:30.660539   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.660545   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.660550   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.664427   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:30.664864   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:30.664872   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.664877   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.664880   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.667569   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.161508   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.161522   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.161528   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.161531   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.164411   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.165052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.165061   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.165070   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.165074   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.167897   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.660843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.660861   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.660868   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.660871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.668224   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:31.668938   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.668954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.668969   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.668987   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.674737   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:32.161451   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.161468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.161496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.161501   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.164555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.165061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.165069   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.165075   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.165078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.167689   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.661269   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.661285   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.661294   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.661316   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.664943   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.665460   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.665469   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.665475   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.665479   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.667934   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.668229   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:33.161930   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.161964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.161971   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.161975   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.165689   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.166478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.166488   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.166497   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.166503   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.169565   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.660809   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.660831   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.660841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.660846   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.664137   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.665061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.665071   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.665078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.665099   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.667811   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.161378   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.161391   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.161398   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.161403   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.165094   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:34.165523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.165531   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.165537   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.165540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.167949   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.661206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.661222   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.661228   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.661230   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.663772   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.664499   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.664507   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.664513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.664517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.666543   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.161667   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.161689   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.161700   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.161705   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.166875   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.167311   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.167319   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.167324   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.167328   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.172902   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.173323   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:35.661973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.661988   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.661994   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.661998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.664583   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.664981   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.664989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.664998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.665001   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.667322   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.161747   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.161785   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.161793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.161796   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.164939   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.165450   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.165459   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.165464   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.165474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.167808   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.661492   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.661508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.661532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.661537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.664941   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.665455   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.665464   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.665471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.665474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.668192   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.161660   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.161678   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.161685   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.161688   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.164012   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.164541   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.164549   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.164555   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.164558   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.166577   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.662457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.662494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.662505   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.662511   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.665311   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.666039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.666048   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.666053   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.666056   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.668294   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.668600   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:38.162628   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.162646   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.162654   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.162659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.165660   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.166284   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.166292   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.166298   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.166301   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.168559   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.662170   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.662185   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.662191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.662195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.664733   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.665194   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.665207   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.665211   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.667563   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.161491   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.161508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.161517   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.161522   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.164370   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.164762   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.164770   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.164776   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.164780   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.166614   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:39.661843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.661860   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.661866   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.661871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.664287   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.664950   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.664958   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.664964   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.664968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.667194   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.160891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.160921   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.160933   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.160955   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.165388   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:40.166039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.166047   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.166052   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.166055   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.168212   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.168635   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:40.661892   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.661907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.661914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.661917   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.664471   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.664962   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.664970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.664975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.664984   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.667379   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.160779   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.160797   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.160824   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.160830   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.163878   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:41.164433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.164441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.164446   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.164451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.166991   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.661124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.661138   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.661145   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.661149   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.663595   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.664206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.664214   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.664220   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.664224   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.666219   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:42.161906   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.161926   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.161937   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.161945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.165222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:42.165752   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.165760   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.165765   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.165769   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.167913   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.661255   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.661274   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.661282   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.661288   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.664242   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.664689   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.664697   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.664703   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.664706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.666742   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.667053   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:43.161512   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.161530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.161565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.161575   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.164590   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:43.165234   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.165242   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.165254   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.165258   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.167961   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.660826   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.660844   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.660873   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.660882   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.663557   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.663959   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.663966   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.663972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.663976   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.665816   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.162103   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.162133   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.162158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.162164   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.165060   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.165598   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.165606   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.165612   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.165615   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.167589   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.662307   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.662328   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.662339   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.662344   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.665063   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.665602   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.665610   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.665615   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.665619   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.667607   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.667948   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:45.161277   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.161307   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.161314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.161317   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.163751   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.164201   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.164209   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.164215   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.164217   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.166274   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.662080   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.662099   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.662106   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.662110   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.664692   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.665145   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.665152   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.665158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.665162   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.667158   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.161983   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.162002   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.162011   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.162016   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.165135   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.165638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.165645   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.165650   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.165654   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.167660   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.660973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.661022   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.661036   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.661046   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.664600   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.665041   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.665051   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.665056   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.665061   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.667006   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:47.161827   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.161883   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.161895   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.161902   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.165549   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:47.166029   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.166037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.166041   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.166045   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.168233   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:47.168577   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:47.661554   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.661603   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.661616   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.661625   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.665796   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:47.666259   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.666266   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.666272   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.666276   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.668466   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.161876   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.161891   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.161898   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.161901   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.164419   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.164835   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.164843   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.164849   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.164853   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.166837   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:48.661562   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.661577   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.661598   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.661603   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.663972   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.664457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.664465   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.664470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.664475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.666445   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:49.161410   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.161430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.161438   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.161443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.164478   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:49.164982   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.164989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.164995   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.164998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.167071   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.660698   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.660724   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.660736   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.660742   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.664916   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:49.665349   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.665357   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.665363   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.665367   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.667392   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.667753   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:50.161030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.161065   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.161073   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.161080   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.163537   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.163963   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.163970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.163975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.163979   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.166093   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.661184   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.661238   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.661263   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.661267   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.663637   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.664117   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.664125   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.664131   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.664134   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.666067   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.161515   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.161550   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.161557   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.161561   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.163979   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.164681   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.164690   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.164694   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.164697   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.166790   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.661266   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.661291   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.661374   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.661387   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.664772   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:51.665195   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.665206   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.665216   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.667400   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.667769   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.667779   12253 pod_ready.go:82] duration metric: took 21.007261829s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667785   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667821   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:51.667826   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.667831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.667836   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.669791   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.670205   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.670213   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.670218   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.670221   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.672346   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.672671   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.672679   12253 pod_ready.go:82] duration metric: took 4.889471ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672685   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672718   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:51.672723   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.672729   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.672737   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.674649   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.675030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.675037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.675043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.675046   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.676915   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.677288   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.677297   12253 pod_ready.go:82] duration metric: took 4.607311ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677303   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677339   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:51.677344   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.677349   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.677352   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.679418   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.679897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:51.679907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.679916   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.679920   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.681919   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.682327   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.682336   12253 pod_ready.go:82] duration metric: took 5.028149ms for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682343   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682376   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:51.682381   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.682386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.682389   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.684781   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.685200   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:51.685207   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.685212   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.685215   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.687181   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.687676   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.687685   12253 pod_ready.go:82] duration metric: took 5.337542ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.687696   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.862280   12253 request.go:632] Waited for 174.544275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862360   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862372   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.862382   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.862386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.865455   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.062085   12253 request.go:632] Waited for 196.080428ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.062136   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.062140   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.064928   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.065322   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.065331   12253 pod_ready.go:82] duration metric: took 377.628905ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.065338   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.261393   12253 request.go:632] Waited for 196.009549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261459   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261471   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.261485   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.261492   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.265336   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.461317   12253 request.go:632] Waited for 195.311084ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461362   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.461370   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.461376   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.464202   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.464645   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.464654   12253 pod_ready.go:82] duration metric: took 399.309786ms for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.464661   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.662233   12253 request.go:632] Waited for 197.535092ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662290   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662297   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.662305   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.662311   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.665143   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.862031   12253 request.go:632] Waited for 196.411368ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862119   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.862140   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.862145   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.866136   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.866533   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.866543   12253 pod_ready.go:82] duration metric: took 401.876526ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.866550   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.061387   12253 request.go:632] Waited for 194.796135ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061453   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061462   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.061470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.061476   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.064293   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:53.261526   12253 request.go:632] Waited for 196.74771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261649   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.261659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.261674   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.265603   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.266028   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.266036   12253 pod_ready.go:82] duration metric: took 399.480241ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.266042   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.461478   12253 request.go:632] Waited for 195.397016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461556   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461564   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.461571   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.461576   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.464932   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.661907   12253 request.go:632] Waited for 196.48537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661965   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661991   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.661998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.662002   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.665079   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.665555   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.665565   12253 pod_ready.go:82] duration metric: took 399.515968ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.665572   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.861347   12253 request.go:632] Waited for 195.73444ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861414   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861426   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.861434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.861439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.864177   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:54.061465   12253 request.go:632] Waited for 196.861398ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061517   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061554   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.061565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.061570   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.064700   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.065020   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.065030   12253 pod_ready.go:82] duration metric: took 399.451485ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.065037   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.263289   12253 request.go:632] Waited for 198.174584ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263384   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263411   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.263436   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.263461   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.266722   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.461554   12253 request.go:632] Waited for 194.387224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461599   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461609   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.461620   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.461627   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.465162   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.465533   12253 pod_ready.go:98] node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465543   12253 pod_ready.go:82] duration metric: took 400.500434ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	E0906 12:06:54.465549   12253 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465555   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.662665   12253 request.go:632] Waited for 197.074891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662731   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662740   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.662749   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.662755   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.665777   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.862800   12253 request.go:632] Waited for 196.680356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862911   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862924   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.862936   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.862945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.866911   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.867361   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.867371   12253 pod_ready.go:82] duration metric: took 401.810264ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.867377   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.062512   12253 request.go:632] Waited for 195.060729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062609   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062629   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.062641   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.062648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.066272   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:55.263362   12253 request.go:632] Waited for 196.717271ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263483   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.263507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.263520   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.268072   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.268453   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.268462   12253 pod_ready.go:82] duration metric: took 401.079128ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.268469   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.462230   12253 request.go:632] Waited for 193.721938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462312   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462320   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.462348   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.462357   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.465173   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:55.662089   12253 request.go:632] Waited for 196.464134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662239   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662255   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.662267   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.662275   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.666427   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.666704   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.666714   12253 pod_ready.go:82] duration metric: took 398.240112ms for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.666721   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.861681   12253 request.go:632] Waited for 194.913797ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861767   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861778   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.861790   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.861799   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.865874   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:56.063343   12253 request.go:632] Waited for 197.091674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063491   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.063501   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.063508   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.067298   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.067689   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.067699   12253 pod_ready.go:82] duration metric: took 400.971333ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.067706   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.261328   12253 request.go:632] Waited for 193.578385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261416   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261431   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.261443   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.261451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.264964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.461367   12253 request.go:632] Waited for 196.051039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.461449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.461454   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.464367   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:56.464786   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.464799   12253 pod_ready.go:82] duration metric: took 397.083037ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.464806   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.662171   12253 request.go:632] Waited for 197.309952ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662326   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662340   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.662352   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.662363   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.665960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.862106   12253 request.go:632] Waited for 195.559257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862214   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862225   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.862236   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.862243   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.866072   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.866312   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.866321   12253 pod_ready.go:82] duration metric: took 401.509457ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.866329   12253 pod_ready.go:39] duration metric: took 26.216828833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:56.866341   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:56.866386   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:56.878910   12253 api_server.go:72] duration metric: took 26.422463192s to wait for apiserver process to appear ...
	I0906 12:06:56.878922   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:56.878935   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:56.883745   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:56.883791   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:56.883796   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.883803   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.883808   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.884469   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:56.884556   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:56.884568   12253 api_server.go:131] duration metric: took 5.641059ms to wait for apiserver health ...
	I0906 12:06:56.884573   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:57.061374   12253 request.go:632] Waited for 176.731786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.061480   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.061487   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.066391   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:57.071924   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:57.071938   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.071942   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.071945   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.071948   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.071952   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.071955   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.071958   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.071962   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.071964   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.071967   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.071973   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.071977   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.071979   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.071982   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.071985   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.071988   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.071991   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.071993   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.071996   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.071999   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.072001   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.072004   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.072007   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.072009   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.072012   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.072017   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.072022   12253 system_pods.go:74] duration metric: took 187.444826ms to wait for pod list to return data ...
	I0906 12:06:57.072029   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:57.261398   12253 request.go:632] Waited for 189.325312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261443   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261451   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.261471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.261475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.264018   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:57.264078   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:57.264086   12253 default_sa.go:55] duration metric: took 192.051635ms for default service account to be created ...
	I0906 12:06:57.264103   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:57.461307   12253 request.go:632] Waited for 197.162907ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461342   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461347   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.461367   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.461393   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.466559   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:57.471959   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:57.471969   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.471974   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.471977   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.471981   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.471985   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.471989   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.471992   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.471994   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.471997   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.472000   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.472003   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.472006   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.472009   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.472012   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.472015   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.472017   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.472020   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.472023   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.472026   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.472029   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.472031   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.472034   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.472037   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.472040   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.472043   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.472047   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.472052   12253 system_pods.go:126] duration metric: took 207.94336ms to wait for k8s-apps to be running ...
	I0906 12:06:57.472059   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:57.472107   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:57.483773   12253 system_svc.go:56] duration metric: took 11.709185ms WaitForService to wait for kubelet
	I0906 12:06:57.483792   12253 kubeadm.go:582] duration metric: took 27.027343725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:57.483805   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:57.662348   12253 request.go:632] Waited for 178.494779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662425   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.662448   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.662457   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.665964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:57.666853   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666864   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666872   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666875   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666879   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666882   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666885   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666892   12253 node_conditions.go:105] duration metric: took 183.082589ms to run NodePressure ...
	I0906 12:06:57.666899   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:57.666913   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:57.689595   12253 out.go:201] 
	I0906 12:06:57.710968   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:57.711085   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.733311   12253 out.go:177] * Starting "ha-343000-m04" worker node in "ha-343000" cluster
	I0906 12:06:57.776497   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:57.776531   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:57.776758   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:57.776776   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:57.776887   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.777953   12253 start.go:360] acquireMachinesLock for ha-343000-m04: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:57.778066   12253 start.go:364] duration metric: took 90.409µs to acquireMachinesLock for "ha-343000-m04"
	I0906 12:06:57.778091   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:57.778100   12253 fix.go:54] fixHost starting: m04
	I0906 12:06:57.778535   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:57.778560   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:57.788011   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56397
	I0906 12:06:57.788364   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:57.788747   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:57.788763   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:57.789004   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:57.789119   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.789216   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:06:57.789290   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.789388   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 12:06:57.790320   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid 10558 missing from process table
	I0906 12:06:57.790346   12253 fix.go:112] recreateIfNeeded on ha-343000-m04: state=Stopped err=<nil>
	I0906 12:06:57.790354   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	W0906 12:06:57.790423   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:57.811236   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m04" ...
	I0906 12:06:57.853317   12253 main.go:141] libmachine: (ha-343000-m04) Calling .Start
	I0906 12:06:57.853695   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.853752   12253 main.go:141] libmachine: (ha-343000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid
	I0906 12:06:57.853833   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Using UUID 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5
	I0906 12:06:57.879995   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Generated MAC 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.880018   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:57.880162   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880191   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880277   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:57.880319   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:57.880330   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:57.881745   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Pid is 12301
	I0906 12:06:57.882213   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Attempt 0
	I0906 12:06:57.882229   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.882285   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 12301
	I0906 12:06:57.884227   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Searching for 6a:d8:ba:fa:e9:e7 in /var/db/dhcpd_leases ...
	I0906 12:06:57.884329   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:57.884344   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:06:57.884361   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:57.884375   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:57.884400   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:57.884406   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetConfigRaw
	I0906 12:06:57.884413   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found match: 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.884464   12253 main.go:141] libmachine: (ha-343000-m04) DBG | IP: 192.169.0.27
	I0906 12:06:57.885084   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:06:57.885308   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.885947   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:57.885958   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.886118   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:06:57.886263   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:06:57.886401   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886518   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886625   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:06:57.886755   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:57.886913   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:06:57.886920   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:57.890225   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:57.898506   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:57.900023   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:57.900046   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:57.900059   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:57.900081   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.292623   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:58.292638   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:58.407402   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:58.407425   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:58.407438   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:58.407462   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.408295   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:58.408305   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:07:04.116677   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:07:04.116760   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:07:04.116771   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:07:04.140349   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:07:32.960229   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:07:32.960245   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960393   12253 buildroot.go:166] provisioning hostname "ha-343000-m04"
	I0906 12:07:32.960404   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960498   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:32.960578   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:32.960651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960733   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960822   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:32.960938   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:32.961089   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:32.961097   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m04 && echo "ha-343000-m04" | sudo tee /etc/hostname
	I0906 12:07:33.029657   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m04
	
	I0906 12:07:33.029671   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.029803   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.029895   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.029994   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.030077   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.030212   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.030354   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.030365   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:07:33.094966   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:07:33.094982   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:07:33.094992   12253 buildroot.go:174] setting up certificates
	I0906 12:07:33.094999   12253 provision.go:84] configureAuth start
	I0906 12:07:33.095005   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:33.095148   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:33.095261   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.095345   12253 provision.go:143] copyHostCerts
	I0906 12:07:33.095383   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095445   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:07:33.095451   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095595   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:07:33.095788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095828   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:07:33.095833   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095913   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:07:33.096069   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096123   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:07:33.096133   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096216   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:07:33.096362   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m04 san=[127.0.0.1 192.169.0.27 ha-343000-m04 localhost minikube]
	I0906 12:07:33.148486   12253 provision.go:177] copyRemoteCerts
	I0906 12:07:33.148536   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:07:33.148551   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.148688   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.148785   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.148886   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.148968   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:33.184847   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:07:33.184925   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:07:33.204793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:07:33.204868   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:07:33.225189   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:07:33.225262   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:07:33.245047   12253 provision.go:87] duration metric: took 150.030083ms to configureAuth
	I0906 12:07:33.245064   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:07:33.245233   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:07:33.245264   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:33.245394   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.245474   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.245563   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245656   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245735   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.245857   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.245998   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.246006   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:07:33.305766   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:07:33.305779   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:07:33.305852   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:07:33.305865   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.305998   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.306097   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306198   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306282   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.306410   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.306555   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.306603   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:07:33.377062   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	Environment=NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:07:33.377081   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.377218   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.377309   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377395   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377470   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.377595   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.377731   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.377745   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:07:34.969419   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:07:34.969435   12253 machine.go:96] duration metric: took 37.07976383s to provisionDockerMachine
	I0906 12:07:34.969443   12253 start.go:293] postStartSetup for "ha-343000-m04" (driver="hyperkit")
	I0906 12:07:34.969451   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:07:34.969464   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:34.969653   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:07:34.969667   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:34.969755   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:34.969839   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:34.969938   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:34.970026   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.005883   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:07:35.009124   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:07:35.009135   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:07:35.009234   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:07:35.009411   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:07:35.009418   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:07:35.009642   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:07:35.017147   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:07:35.037468   12253 start.go:296] duration metric: took 68.014068ms for postStartSetup
	I0906 12:07:35.037488   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.037659   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:07:35.037673   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.037762   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.037851   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.037939   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.038032   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.073675   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:07:35.073738   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:07:35.107246   12253 fix.go:56] duration metric: took 37.325422655s for fixHost
	I0906 12:07:35.107273   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.107423   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.107527   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107605   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107700   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.107824   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:35.107967   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:35.107979   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:07:35.169429   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649655.267789382
	
	I0906 12:07:35.169443   12253 fix.go:216] guest clock: 1725649655.267789382
	I0906 12:07:35.169449   12253 fix.go:229] Guest: 2024-09-06 12:07:35.267789382 -0700 PDT Remote: 2024-09-06 12:07:35.107262 -0700 PDT m=+153.317111189 (delta=160.527382ms)
	I0906 12:07:35.169466   12253 fix.go:200] guest clock delta is within tolerance: 160.527382ms
	I0906 12:07:35.169472   12253 start.go:83] releasing machines lock for "ha-343000-m04", held for 37.387671405s
	I0906 12:07:35.169494   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.169634   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:35.192021   12253 out.go:177] * Found network options:
	I0906 12:07:35.212912   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	W0906 12:07:35.233597   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233618   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233628   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.233643   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234159   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234366   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234455   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:07:35.234491   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	W0906 12:07:35.234542   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234565   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234576   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.234648   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:07:35.234651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234665   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.234826   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234871   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235007   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235056   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235182   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235206   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.235315   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	W0906 12:07:35.268496   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:07:35.268557   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:07:35.318514   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:07:35.318528   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.318592   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.333874   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:07:35.343295   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:07:35.352492   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.352552   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:07:35.361630   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.370668   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:07:35.379741   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.389143   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:07:35.398542   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:07:35.407763   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:07:35.416819   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:07:35.426383   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:07:35.434689   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:07:35.442821   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:35.546285   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:07:35.565383   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.565458   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:07:35.587708   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.599182   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:07:35.618394   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.629619   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.640716   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:07:35.663169   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.673665   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.688883   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:07:35.691747   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:07:35.698972   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:07:35.712809   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:07:35.816741   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:07:35.926943   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.926972   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:07:35.942083   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:36.036699   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:08:37.056745   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01976389s)
	I0906 12:08:37.056810   12253 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:08:37.092348   12253 out.go:201] 
	W0906 12:08:37.113034   12253 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:07:33 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388087675Z" level=info msg="Starting up"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388874857Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.389448447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.406541023Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421511237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421602459Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421668995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421705837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421880023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421931200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422075608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422118185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422150327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422179563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422320644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422541368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424094220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424143575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424295349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424338381Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424460558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424511586Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425636722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425688205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425727379Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425760048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425791193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425860087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426020444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426094135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426129732Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426167338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426204356Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426237806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426268346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426298666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426328562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426358230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426389211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426418321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426456445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426487889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426516746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426546507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426578999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426618589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426715802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426750125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426780114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426818663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426851076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426879866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426909029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426949139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426988055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427021053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427049769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427133633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427177682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427207151Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427236043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427298115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427372740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427431600Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427611432Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427700568Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427760941Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427803687Z" level=info msg="containerd successfully booted in 0.022207s"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.407865115Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.420336385Z" level=info msg="Loading containers: start."
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.515687290Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.987987334Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.032534306Z" level=info msg="Loading containers: done."
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.046984897Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.047174717Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066396312Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066609197Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:07:35 ha-343000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.147371084Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:07:36 ha-343000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.149138373Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.151983630Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152081675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152156440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:37 ha-343000-m04 dockerd[1111]: time="2024-09-06T19:07:37.182746438Z" level=info msg="Starting up"
	Sep 06 19:08:37 ha-343000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:08:37.113090   12253 out.go:270] * 
	W0906 12:08:37.114019   12253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:08:37.156019   12253 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203311461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203639509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b7ad89fb08b292cfac509e0c383de126da238700a4e5bad8ad55590054381dba/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e01343203b7a509a71640de600f467038bad7b3d1d628993d32a37ee491ef5d1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f2f69bda625f237b44e2bc9af0e9cfd8b05e944b06149fba0d64a3e513338ba1/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607046115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607111680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607122664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607194485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.645965722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646293720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646498986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.648910956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664089064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664361369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664585443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664903965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:42 ha-343000 dockerd[1148]: time="2024-09-06T19:06:42.976990703Z" level=info msg="ignoring event" container=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977534371Z" level=info msg="shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977730802Z" level=warning msg="cleaning up after shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977773534Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339610101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339689283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339702665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.340050558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1a60be55b6a1       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   f2f69bda625f2       storage-provisioner
	0e02b4bf2dbaa       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   e01343203b7a5       busybox-7dff88458-x6w7h
	22c131171f901       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   f2f69bda625f2       storage-provisioner
	803c4f073a4fa       ad83b2ca7b09e                                                                                         3 minutes ago       Running             kube-proxy                1                   b7ad89fb08b29       kube-proxy-x6pfk
	554acd0f20e32       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   a2638e4522073       coredns-6f6b679f8f-q4rhs
	c86abdd0a1a3a       12968670680f4                                                                                         3 minutes ago       Running             kindnet-cni               1                   b2c6d9f178680       kindnet-tj4jx
	d15c1bf38706e       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   9e798ad091c8d       coredns-6f6b679f8f-99jtt
	890baa8f92fc8       045733566833c                                                                                         3 minutes ago       Running             kube-controller-manager   6                   26308c7f15e49       kube-controller-manager-ha-343000
	9ca63a507d338       604f5db92eaa8                                                                                         4 minutes ago       Running             kube-apiserver            6                   70de0991ef26f       kube-apiserver-ha-343000
	5f2ecf46dbad7       38af8ddebf499                                                                                         4 minutes ago       Running             kube-vip                  1                   1804cca78c5d0       kube-vip-ha-343000
	4d2f47c39f165       1766f54c897f0                                                                                         4 minutes ago       Running             kube-scheduler            2                   df0b4d2f0d771       kube-scheduler-ha-343000
	592c214e97d5c       604f5db92eaa8                                                                                         4 minutes ago       Exited              kube-apiserver            5                   70de0991ef26f       kube-apiserver-ha-343000
	8bdc400b3db6d       2e96e5913fc06                                                                                         4 minutes ago       Running             etcd                      2                   83808e05f091c       etcd-ha-343000
	5cc4eed8c219e       045733566833c                                                                                         4 minutes ago       Exited              kube-controller-manager   5                   26308c7f15e49       kube-controller-manager-ha-343000
	4066393d7e7ae       38af8ddebf499                                                                                         9 minutes ago       Exited              kube-vip                  0                   6a05e2d25f30e       kube-vip-ha-343000
	9b99b2f8d6eda       1766f54c897f0                                                                                         9 minutes ago       Exited              kube-scheduler            1                   920b387c38cf9       kube-scheduler-ha-343000
	11af4dafae646       2e96e5913fc06                                                                                         9 minutes ago       Exited              etcd                      1                   c94f15fec6f2c       etcd-ha-343000
	126eb18521cb6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   2dc504f501783       busybox-7dff88458-x6w7h
	34d5a9fcc1387       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   80fa6178f69f4       coredns-6f6b679f8f-99jtt
	931a9cafdfafa       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   7b9ebf456874a       coredns-6f6b679f8f-q4rhs
	9e6763d81a899       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Exited              kindnet-cni               0                   c552ca6da226c       kindnet-tj4jx
	9ab0b6ac90ac6       ad83b2ca7b09e                                                                                         14 minutes ago      Exited              kube-proxy                0                   3b385975c32bf       kube-proxy-x6pfk
	
	
	==> coredns [34d5a9fcc138] <==
	[INFO] 10.244.2.2:58789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120754s
	[INFO] 10.244.2.2:43811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080086s
	[INFO] 10.244.1.2:37705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094111s
	[INFO] 10.244.1.2:51020 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101921s
	[INFO] 10.244.1.2:35595 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128009s
	[INFO] 10.244.1.2:37466 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081653s
	[INFO] 10.244.1.2:44316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092754s
	[INFO] 10.244.0.4:46178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007817s
	[INFO] 10.244.0.4:45010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093888s
	[INFO] 10.244.0.4:53754 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054541s
	[INFO] 10.244.0.4:50908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074295s
	[INFO] 10.244.0.4:40350 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117915s
	[INFO] 10.244.2.2:46721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198726s
	[INFO] 10.244.2.2:49403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105805s
	[INFO] 10.244.2.2:38196 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015881s
	[INFO] 10.244.1.2:40271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009061s
	[INFO] 10.244.1.2:58192 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123353s
	[INFO] 10.244.1.2:58287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102796s
	[INFO] 10.244.2.2:60545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120865s
	[INFO] 10.244.1.2:58192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108489s
	[INFO] 10.244.0.4:46772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135939s
	[INFO] 10.244.0.4:57982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000032936s
	[INFO] 10.244.0.4:40948 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [554acd0f20e3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37373 - 8840 "HINFO IN 6495643642992279060.3361092094518909540. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011184519s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[237904971]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.794) (total time: 30004ms):
	Trace[237904971]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:42.797)
	Trace[237904971]: [30.004464183s] [30.004464183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[660143257]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.798) (total time: 30000ms):
	Trace[660143257]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:42.799)
	Trace[660143257]: [30.000893558s] [30.000893558s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[380072670]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30007ms):
	Trace[380072670]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.797)
	Trace[380072670]: [30.007427279s] [30.007427279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [931a9cafdfaf] <==
	[INFO] 10.244.2.2:47871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092349s
	[INFO] 10.244.2.2:36751 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154655s
	[INFO] 10.244.2.2:35765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113227s
	[INFO] 10.244.2.2:34953 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189846s
	[INFO] 10.244.1.2:37377 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000779385s
	[INFO] 10.244.1.2:36374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000523293s
	[INFO] 10.244.1.2:47415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043613s
	[INFO] 10.244.0.4:56645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006213s
	[INFO] 10.244.0.4:51009 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096214s
	[INFO] 10.244.0.4:41355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183012s
	[INFO] 10.244.2.2:50655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138209s
	[INFO] 10.244.1.2:38832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167262s
	[INFO] 10.244.0.4:46148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117645s
	[INFO] 10.244.0.4:43019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107376s
	[INFO] 10.244.0.4:57161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028792s
	[INFO] 10.244.0.4:42860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034502s
	[INFO] 10.244.2.2:36830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089883s
	[INFO] 10.244.2.2:47924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141909s
	[INFO] 10.244.2.2:47506 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000097095s
	[INFO] 10.244.1.2:49209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011143s
	[INFO] 10.244.1.2:36137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100085s
	[INFO] 10.244.1.2:47199 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000096821s
	[INFO] 10.244.0.4:43720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040385s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d15c1bf38706] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54176 - 21158 "HINFO IN 3457232632200313932.3905864345721771129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010437248s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1587501409]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.793) (total time: 30005ms):
	Trace[1587501409]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.798)
	Trace[1587501409]: [30.005577706s] [30.005577706s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[680749614]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30005ms):
	Trace[680749614]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:06:42.798)
	Trace[680749614]: [30.005762488s] [30.005762488s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1474873071]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.799) (total time: 30001ms):
	Trace[1474873071]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:42.800)
	Trace[1474873071]: [30.001544995s] [30.001544995s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-343000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_55_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:55:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.24
	  Hostname:    ha-343000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6523db55e885482e8ac62c2082b7e4e8
	  System UUID:                36fe47a6-0000-0000-a226-e026237c9096
	  Boot ID:                    a6ec27d4-119e-4645-b472-4cbf4d3b3af4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6w7h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-99jtt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-q4rhs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-343000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-tj4jx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-343000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-343000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-x6pfk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-343000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-343000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m53s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           14m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  NodeReady                14m                    kubelet          Node ha-343000 status is now: NodeReady
	  Normal  RegisteredNode           13m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           3m30s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           25s                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	
	
	Name:               ha-343000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.25
	  Hostname:    ha-343000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 01c58e04d4304f6f9c11ce89f0bbf71d
	  System UUID:                2c7446f3-0000-0000-9664-55c72aec5dea
	  Boot ID:                    d9c8abd7-e4ec-46d0-892f-bd1bfa22eaef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jk74s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-343000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-5rtpx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-343000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-343000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-zjx8z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-343000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-343000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m57s                  kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Warning  Rebooted                 10m                    kubelet          Node ha-343000-m02 has been rebooted, boot id: 9a70d273-2199-426f-b35f-a9b4075cc0d7
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           3m54s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           3m30s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	
	
	Name:               ha-343000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_57_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:57:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.26
	  Hostname:    ha-343000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da881992752a4b679c6a5b2a9f0cdfbb
	  System UUID:                5abf4f35-0000-0000-b6fc-c88bfc629e81
	  Boot ID:                    1683487f-47c5-465d-9b2b-74dea29e28d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2kj2b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-343000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-ksnvp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-343000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-343000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-r285j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-343000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-343000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3m32s              kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           4m15s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           3m54s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   Starting                 3m37s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m37s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m37s              kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m37s              kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m37s              kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m37s              kubelet          Node ha-343000-m03 has been rebooted, boot id: 1683487f-47c5-465d-9b2b-74dea29e28d4
	  Normal   RegisteredNode           3m30s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           25s                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	
	
	Name:               ha-343000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_58_13_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:58:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:59:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.27
	  Hostname:    ha-343000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 25099ec69db34e82bcd2f07d22b80010
	  System UUID:                0c454e5f-0000-0000-8b6f-82e9c2aa82c5
	  Boot ID:                    b76c6143-1924-46d7-b754-0208a6d7ff29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rf4h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-8hww6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-343000-m04 status is now: NodeHasNoDiskPressure
	  Normal  CIDRAssignmentFailed     11m                cidrAllocator    Node ha-343000-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           11m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeReady                11m                kubelet          Node ha-343000-m04 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           4m15s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           3m54s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeNotReady             3m35s              node-controller  Node ha-343000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           3m30s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           25s                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	
	
	Name:               ha-343000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T12_09_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:09:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.28
	  Hostname:    ha-343000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 c35084f0124f46c6bde88f6228c41b21
	  System UUID:                b7ce4581-0000-0000-b100-3eaa6ce1c90b
	  Boot ID:                    1be229f7-63d3-4854-87de-793aff331e2a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-343000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         31s
	  kube-system                 kindnet-f4mts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      33s
	  kube-system                 kube-apiserver-ha-343000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-ha-343000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-7xrbs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-ha-343000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-vip-ha-343000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s (x8 over 34s)  kubelet          Node ha-343000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 34s)  kubelet          Node ha-343000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x7 over 34s)  kubelet          Node ha-343000-m05 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	  Normal  RegisteredNode           30s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	  Normal  RegisteredNode           25s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036474] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008025] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.716498] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006721] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.833567] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.343017] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.247177] systemd-fstab-generator[471]: Ignoring "noauto" option for root device
	[  +0.103204] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +1.994098] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +0.255819] systemd-fstab-generator[1114]: Ignoring "noauto" option for root device
	[  +0.098656] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.058515] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.064719] systemd-fstab-generator[1140]: Ignoring "noauto" option for root device
	[  +2.463494] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.126800] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.101663] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.133711] systemd-fstab-generator[1394]: Ignoring "noauto" option for root device
	[  +0.457617] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[  +6.844240] kauditd_printk_skb: 190 callbacks suppressed
	[ +21.300680] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 6 19:06] kauditd_printk_skb: 83 callbacks suppressed
	
	
	==> etcd [11af4dafae64] <==
	{"level":"warn","ts":"2024-09-06T19:04:56.004501Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:04:56.510489Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:04:56.955363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982261Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000937137s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-06T19:04:56.982656Z","caller":"traceutil/trace.go:171","msg":"trace[219101750] range","detail":"{range_begin:; range_end:; }","duration":"7.001140659s","start":"2024-09-06T19:04:49.981500Z","end":"2024-09-06T19:04:56.982641Z","steps":["trace[219101750] 'agreement among raft nodes before linearized reading'  (duration: 7.000934405s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:04:56.982940Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:04:58.256456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839480Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839529Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842271Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"info","ts":"2024-09-06T19:04:59.555087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	
	
	==> etcd [8bdc400b3db6] <==
	{"level":"info","ts":"2024-09-06T19:06:32.447583Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.448798Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"6a6e0aa498652645","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:06:32.448838Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-06T19:09:34.382036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 switched to configuration voters=(7165863487987372753 7669078917506213445 7907831940721422322) learners=(17537987181276551891)"}
	{"level":"info","ts":"2024-09-06T19:09:34.382516Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6f1c753fc4a3cb","local-member-id":"6dbe4340aa302ff2","added-peer-id":"f363726bcf57dad3","added-peer-peer-urls":["https://192.169.0.28:2380"]}
	{"level":"info","ts":"2024-09-06T19:09:34.382681Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.382753Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.383131Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.383327Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.384392Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.384608Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3","remote-peer-urls":["https://192.169.0.28:2380"]}
	{"level":"info","ts":"2024-09-06T19:09:34.384885Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.385070Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"6dbe4340aa302ff2","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.385746Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.808975Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.809178Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.817626Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.858169Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"f363726bcf57dad3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-06T19:09:35.858364Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.870254Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"f363726bcf57dad3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:09:35.870298Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:36.944608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 switched to configuration voters=(7165863487987372753 7669078917506213445 7907831940721422322 17537987181276551891)"}
	{"level":"info","ts":"2024-09-06T19:09:36.944792Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e6f1c753fc4a3cb","local-member-id":"6dbe4340aa302ff2"}
	
	
	==> kernel <==
	 19:10:07 up 5 min,  0 users,  load average: 0.24, 0.24, 0.11
	Linux ha-343000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9e6763d81a89] <==
	I0906 18:59:27.723199       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727295       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:37.727338       1 main.go:299] handling current node
	I0906 18:59:37.727349       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:37.727353       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:37.727428       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:37.727453       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727489       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:37.727513       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:47.728363       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:47.728518       1 main.go:299] handling current node
	I0906 18:59:47.728633       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:47.728739       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:47.728918       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:47.728997       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:47.729121       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:47.729229       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:57.722632       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:57.722671       1 main.go:299] handling current node
	I0906 18:59:57.722682       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:57.722686       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:57.722937       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:57.722967       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:57.723092       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:57.723199       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c86abdd0a1a3] <==
	I0906 19:09:43.506789       1 main.go:299] handling current node
	I0906 19:09:43.506800       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:09:43.506805       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:09:43.506939       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:09:43.507012       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:09:53.503008       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:09:53.503569       1 main.go:299] handling current node
	I0906 19:09:53.503905       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:09:53.504067       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:09:53.504245       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:09:53.504379       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:09:53.504554       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:09:53.504700       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:09:53.504831       1 main.go:295] Handling node with IPs: map[192.169.0.28:{}]
	I0906 19:09:53.504946       1 main.go:322] Node ha-343000-m05 has CIDR [10.244.4.0/24] 
	I0906 19:10:03.506756       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:10:03.506987       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:10:03.507271       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:10:03.507377       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:10:03.507512       1 main.go:295] Handling node with IPs: map[192.169.0.28:{}]
	I0906 19:10:03.507634       1 main.go:322] Node ha-343000-m05 has CIDR [10.244.4.0/24] 
	I0906 19:10:03.507759       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:10:03.507874       1 main.go:299] handling current node
	I0906 19:10:03.507924       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:10:03.508028       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [592c214e97d5] <==
	I0906 19:05:27.461896       1 options.go:228] external host was not specified, using 192.169.0.24
	I0906 19:05:27.465176       1 server.go:142] Version: v1.31.0
	I0906 19:05:27.465213       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.107777       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:05:28.107810       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:05:28.107883       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:05:28.108002       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:05:28.108375       1 instance.go:232] Using reconciler: lease
	W0906 19:05:48.100071       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:05:48.101622       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0906 19:05:48.109302       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9ca63a507d33] <==
	I0906 19:06:00.319954       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0906 19:06:00.329227       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0906 19:06:00.389615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:06:00.399153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:06:00.399318       1 policy_source.go:224] refreshing policies
	I0906 19:06:00.418950       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:06:00.418975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:06:00.419196       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:06:00.421841       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:06:00.423174       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:06:00.423547       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:06:00.423580       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:06:00.423586       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:06:00.423589       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:06:00.423592       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:06:00.424202       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:06:00.424372       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:06:00.429383       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0906 19:06:00.444807       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.25]
	I0906 19:06:00.446706       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:06:00.460452       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0906 19:06:00.463465       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0906 19:06:00.488387       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:06:01.327320       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:06:01.574034       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.24 192.169.0.25]
	
	
	==> kube-controller-manager [5cc4eed8c219] <==
	I0906 19:05:28.174269       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:05:28.573887       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:05:28.573928       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.585160       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:05:28.585380       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:05:28.585888       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:05:28.586027       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:05:49.113760       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.24:8443/healthz\": dial tcp 192.169.0.24:8443: connect: connection refused"
	
	
	==> kube-controller-manager [890baa8f92fc] <==
	I0906 19:09:34.213167       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-343000-m05" podCIDRs=["10.244.4.0/24"]
	I0906 19:09:34.213624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:34.213692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:34.235668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:34.288789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	E0906 19:09:34.364650       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"d4bfe8d6-d130-47f9-a49c-d1349255746b\", ResourceVersion:\"2070\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 6, 18, 55, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b9cea0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\
", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002961e80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024737d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolum
eSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVo
lumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024737e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtua
lDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b9cee0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Res
ourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\
"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0027e7200), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002a272c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00295d580), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Host
Alias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002a63450)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a27320)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:3, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfille
d on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0906 19:09:37.402652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.042560       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.052836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.110172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.160559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.229041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.608683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.609244       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-343000-m05"
	I0906 19:09:38.701224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:42.231946       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:09:42.242032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:42.324133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:09:44.477813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:48.122052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:52.422627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:56.045182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:56.072455       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:57.266707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:10:05.182527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	
	
	==> kube-proxy [803c4f073a4f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:06:13.148913       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:06:13.172780       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 19:06:13.173030       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:06:13.214090       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:06:13.214133       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:06:13.214154       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:06:13.217530       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:06:13.218331       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:06:13.218361       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:06:13.222797       1 config.go:197] "Starting service config controller"
	I0906 19:06:13.222930       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:06:13.223035       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:06:13.223104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:06:13.225748       1 config.go:326] "Starting node config controller"
	I0906 19:06:13.225874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:06:13.323124       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:06:13.324280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:06:13.326187       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9ab0b6ac90ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:55:13.194683       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:55:13.204778       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 18:55:13.204815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:55:13.260675       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:55:13.260697       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:55:13.260715       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:55:13.267079       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:55:13.267303       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:55:13.267312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:55:13.269494       1 config.go:197] "Starting service config controller"
	I0906 18:55:13.269521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:55:13.269531       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:55:13.269534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:55:13.269766       1 config.go:326] "Starting node config controller"
	I0906 18:55:13.269792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:55:13.371232       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:55:13.371252       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:55:13.371258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4d2f47c39f16] <==
	W0906 19:05:58.391628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.391680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.574460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.574508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.613456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.613730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	I0906 19:06:06.337934       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:09:34.254444       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hbcb4\": pod kindnet-hbcb4 is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-hbcb4" node="ha-343000-m05"
	E0906 19:09:34.254918       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hbcb4\": pod kindnet-hbcb4 is already assigned to node \"ha-343000-m05\"" pod="kube-system/kindnet-hbcb4"
	E0906 19:09:34.255088       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sdxss\": pod kube-proxy-sdxss is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sdxss" node="ha-343000-m05"
	E0906 19:09:34.255340       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sdxss\": pod kube-proxy-sdxss is already assigned to node \"ha-343000-m05\"" pod="kube-system/kube-proxy-sdxss"
	E0906 19:09:34.273658       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wtphc\": pod kube-proxy-wtphc is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wtphc" node="ha-343000-m05"
	E0906 19:09:34.273753       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wtphc\": pod kube-proxy-wtphc is already assigned to node \"ha-343000-m05\"" pod="kube-system/kube-proxy-wtphc"
	E0906 19:09:34.289514       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f4mts\": pod kindnet-f4mts is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-f4mts" node="ha-343000-m05"
	E0906 19:09:34.289982       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bcac08d6-b75f-4fbb-a399-2c77e9b2e57d(kube-system/kindnet-f4mts) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f4mts"
	E0906 19:09:34.290020       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f4mts\": pod kindnet-f4mts is already assigned to node \"ha-343000-m05\"" pod="kube-system/kindnet-f4mts"
	I0906 19:09:34.290350       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f4mts" node="ha-343000-m05"
	E0906 19:09:34.317789       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sbspk\": pod kindnet-sbspk is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-sbspk" node="ha-343000-m05"
	E0906 19:09:34.317849       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2c2e6dc3-8631-4375-8f25-517f8c32c39b(kube-system/kindnet-sbspk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sbspk"
	E0906 19:09:34.317863       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sbspk\": pod kindnet-sbspk is already assigned to node \"ha-343000-m05\"" pod="kube-system/kindnet-sbspk"
	I0906 19:09:34.317875       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sbspk" node="ha-343000-m05"
	E0906 19:09:34.346836       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7xrbs\": pod kube-proxy-7xrbs is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7xrbs" node="ha-343000-m05"
	E0906 19:09:34.346894       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1de2379e-e9ef-4da3-915f-dc0986a6129e(kube-system/kube-proxy-7xrbs) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7xrbs"
	E0906 19:09:34.346911       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7xrbs\": pod kube-proxy-7xrbs is already assigned to node \"ha-343000-m05\"" pod="kube-system/kube-proxy-7xrbs"
	I0906 19:09:34.346924       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7xrbs" node="ha-343000-m05"
	
	
	==> kube-scheduler [9b99b2f8d6ed] <==
	W0906 19:04:31.417232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.417325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:31.755428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.755742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:35.986154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:35.986279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.066579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.066654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.563029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.563228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.748870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.749078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:45.521553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:45.521675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:47.041120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:47.041443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:52.540182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:52.540432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:54.069445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:54.069585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	E0906 19:04:59.711524       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0906 19:04:59.712006       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0906 19:04:59.712120       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 19:04:59.712142       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0906 19:04:59.712922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 06 19:06:20 ha-343000 kubelet[1561]: E0906 19:06:20.331039    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:06:20 ha-343000 kubelet[1561]: I0906 19:06:20.393885    1561 scope.go:117] "RemoveContainer" containerID="b3713b7090d8f8af511e66546413a97f331dea488be8efe378a26980838f7cf4"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211095    1561 scope.go:117] "RemoveContainer" containerID="051e748db656a81282f4811bb15ed42555514a115306dfa611e2c0d2af72e345"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211309    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: E0906 19:06:43.211390    1561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9815f44c-20e3-4243-8eb4-60cd42a850ad)\"" pod="kube-system/storage-provisioner" podUID="9815f44c-20e3-4243-8eb4-60cd42a850ad"
	Sep 06 19:06:57 ha-343000 kubelet[1561]: I0906 19:06:57.289715    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:07:20 ha-343000 kubelet[1561]: E0906 19:07:20.331091    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:08:20 ha-343000 kubelet[1561]: E0906 19:08:20.333049    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:09:20 ha-343000 kubelet[1561]: E0906 19:09:20.331561    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:09:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:09:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:09:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:09:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-343000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (83.55s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-343000" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-343000\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-343000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":
false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-343000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.24\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"Control
Plane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.25\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.26\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.27\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.169.0.28\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"i
stio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/U
sers:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-343000 -n ha-343000
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 logs -n 25: (3.471958085s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m04 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp testdata/cp-test.txt                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000 sudo cat                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m02 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | ha-343000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-343000 ssh -n ha-343000-m03 sudo cat                                                                                      | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:58 PDT |
	|         | /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-343000 node stop m02 -v=7                                                                                                 | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:58 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-343000 node start m02 -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 11:59 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000 -v=7                                                                                                       | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-343000 -v=7                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 11:59 PDT | 06 Sep 24 12:00 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true -v=7                                                                                                | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:00 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-343000                                                                                                            | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	| node    | ha-343000 node delete m03 -v=7                                                                                               | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-343000 stop -v=7                                                                                                          | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:02 PDT | 06 Sep 24 12:05 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-343000 --wait=true                                                                                                     | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:05 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-343000                                                                                                             | ha-343000 | jenkins | v1.34.0 | 06 Sep 24 12:08 PDT | 06 Sep 24 12:10 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:05:01
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:05:01.821113   12253 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:05:01.821396   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821403   12253 out.go:358] Setting ErrFile to fd 2...
	I0906 12:05:01.821407   12253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:05:01.821585   12253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:05:01.822962   12253 out.go:352] Setting JSON to false
	I0906 12:05:01.845482   12253 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11072,"bootTime":1725638429,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:05:01.845567   12253 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:05:01.867344   12253 out.go:177] * [ha-343000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:05:01.909192   12253 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:05:01.909251   12253 notify.go:220] Checking for updates...
	I0906 12:05:01.951681   12253 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:01.972896   12253 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:05:01.993997   12253 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:05:02.014915   12253 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:05:02.036376   12253 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:05:02.058842   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:02.059362   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.059426   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.069603   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56303
	I0906 12:05:02.069962   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.070394   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.070407   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.070602   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.070721   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.070905   12253 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:05:02.071152   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.071173   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.079785   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56305
	I0906 12:05:02.080100   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.080480   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.080508   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.080753   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.080876   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.109151   12253 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:05:02.151203   12253 start.go:297] selected driver: hyperkit
	I0906 12:05:02.151225   12253 start.go:901] validating driver "hyperkit" against &{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:d
efault APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gv
isor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.151398   12253 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:05:02.151526   12253 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.151681   12253 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:05:02.160708   12253 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:05:02.164397   12253 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.164417   12253 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:05:02.167034   12253 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:05:02.167076   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:02.167082   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:02.167157   12253 start.go:340] cluster config:
	{Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:02.167283   12253 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:05:02.209167   12253 out.go:177] * Starting "ha-343000" primary control-plane node in "ha-343000" cluster
	I0906 12:05:02.230210   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:02.230284   12253 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:05:02.230304   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:02.230523   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:02.230539   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:02.230657   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.231246   12253 start.go:360] acquireMachinesLock for ha-343000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:02.231321   12253 start.go:364] duration metric: took 58.855µs to acquireMachinesLock for "ha-343000"
	I0906 12:05:02.231338   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:02.231348   12253 fix.go:54] fixHost starting: 
	I0906 12:05:02.231579   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:02.231602   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:02.240199   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56307
	I0906 12:05:02.240538   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:02.240898   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:02.240906   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:02.241115   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:02.241241   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.241344   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:02.241429   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.241509   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12107
	I0906 12:05:02.242441   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid 12107 missing from process table
	I0906 12:05:02.242473   12253 fix.go:112] recreateIfNeeded on ha-343000: state=Stopped err=<nil>
	I0906 12:05:02.242488   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	W0906 12:05:02.242570   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:02.285299   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000" ...
	I0906 12:05:02.308252   12253 main.go:141] libmachine: (ha-343000) Calling .Start
	I0906 12:05:02.308536   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.308568   12253 main.go:141] libmachine: (ha-343000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid
	I0906 12:05:02.308690   12253 main.go:141] libmachine: (ha-343000) DBG | Using UUID 36fe57fe-68ea-47a6-a226-e026237c9096
	I0906 12:05:02.418778   12253 main.go:141] libmachine: (ha-343000) DBG | Generated MAC e:ef:97:91:be:81
	I0906 12:05:02.418805   12253 main.go:141] libmachine: (ha-343000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:02.418989   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419036   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36fe57fe-68ea-47a6-a226-e026237c9096", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000299650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:02.419095   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36fe57fe-68ea-47a6-a226-e026237c9096", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:02.419142   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36fe57fe-68ea-47a6-a226-e026237c9096 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/ha-343000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:02.419160   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:02.420829   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 DEBUG: hyperkit: Pid is 12266
	I0906 12:05:02.421178   12253 main.go:141] libmachine: (ha-343000) DBG | Attempt 0
	I0906 12:05:02.421194   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:02.421256   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:02.422249   12253 main.go:141] libmachine: (ha-343000) DBG | Searching for e:ef:97:91:be:81 in /var/db/dhcpd_leases ...
	I0906 12:05:02.422316   12253 main.go:141] libmachine: (ha-343000) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:02.422340   12253 main.go:141] libmachine: (ha-343000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66db525c}
	I0906 12:05:02.422356   12253 main.go:141] libmachine: (ha-343000) DBG | Found match: e:ef:97:91:be:81
	I0906 12:05:02.422371   12253 main.go:141] libmachine: (ha-343000) DBG | IP: 192.169.0.24
	I0906 12:05:02.422430   12253 main.go:141] libmachine: (ha-343000) Calling .GetConfigRaw
	I0906 12:05:02.423159   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:02.423357   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:02.423787   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:02.423798   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:02.423945   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:02.424057   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:02.424240   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:02.424491   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:02.424632   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:02.424882   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:02.424892   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:02.428574   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:02.479264   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:02.479938   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.479953   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.479971   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.479984   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.867700   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:02.867715   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:02.983045   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:02.983079   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:02.983090   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:02.983110   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:02.983957   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:02.983967   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:08.596032   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:05:08.596072   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:05:08.596081   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:05:08.620302   12253 main.go:141] libmachine: (ha-343000) DBG | 2024/09/06 12:05:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:05:13.496727   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:13.496743   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.496887   12253 buildroot.go:166] provisioning hostname "ha-343000"
	I0906 12:05:13.496898   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.497005   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.497091   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.497190   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497290   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.497391   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.497515   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.497658   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.497666   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000 && echo "ha-343000" | sudo tee /etc/hostname
	I0906 12:05:13.573506   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000
	
	I0906 12:05:13.573525   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.573649   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.573744   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573841   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.573933   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.574054   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.574199   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.574210   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:05:13.646449   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:13.646474   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:13.646492   12253 buildroot.go:174] setting up certificates
	I0906 12:05:13.646500   12253 provision.go:84] configureAuth start
	I0906 12:05:13.646506   12253 main.go:141] libmachine: (ha-343000) Calling .GetMachineName
	I0906 12:05:13.646647   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:13.646742   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.646835   12253 provision.go:143] copyHostCerts
	I0906 12:05:13.646872   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.646964   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:13.646972   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:13.647092   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:13.647297   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647337   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:13.647342   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:13.647419   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:13.647566   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647604   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:13.647609   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:13.647688   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:13.647833   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000 san=[127.0.0.1 192.169.0.24 ha-343000 localhost minikube]
	I0906 12:05:13.694032   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:13.694082   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:13.694097   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.694208   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.694294   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.694394   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.694509   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:13.734054   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:13.734119   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:13.754153   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:13.754219   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 12:05:13.773776   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:13.773840   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 12:05:13.793258   12253 provision.go:87] duration metric: took 146.744964ms to configureAuth
	I0906 12:05:13.793272   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:13.793440   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:13.793455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:13.793596   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.793699   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.793786   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793872   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.793955   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.794076   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.794207   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.794215   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:13.860967   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:13.860981   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:13.861068   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:13.861082   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.861205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.861297   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861411   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.861521   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.861683   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.861822   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.861868   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:13.937805   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:13.937827   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:13.937964   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:13.938080   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938205   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:13.938295   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:13.938419   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:13.938558   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:13.938571   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:15.619728   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:15.619742   12253 machine.go:96] duration metric: took 13.195921245s to provisionDockerMachine
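	The SSH command above is an idempotent "install only if missing or changed" one-liner: `diff -u` fails when the installed unit file is absent (as in this run, hence the "can't stat" output) or differs from the freshly generated `.new` file, and only then is the new file moved into place and the service reloaded. A minimal sketch of the same pattern against throwaway files (paths here are illustrative, not the real unit paths; the real command follows the `mv` with `systemctl daemon-reload`/`enable`/`restart`):

```shell
# Install a config file only if it is missing or its contents changed.
dir=$(mktemp -d)
printf 'ExecStart=new\n' > "$dir/docker.service.new"   # freshly generated unit
# docker.service does not exist yet, so diff exits non-zero and the
# new file is moved into place; on a second run diff succeeds and
# the mv (and any reload/restart) is skipped.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
  || mv "$dir/docker.service.new" "$dir/docker.service"
cat "$dir/docker.service"
```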
	I0906 12:05:15.619754   12253 start.go:293] postStartSetup for "ha-343000" (driver="hyperkit")
	I0906 12:05:15.619762   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:15.619772   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.619950   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:15.619966   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.620058   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.620154   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.620257   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.620337   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.660028   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:15.663309   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:15.663323   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:15.663418   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:15.663631   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:15.663638   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:15.663848   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:15.671393   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:15.691128   12253 start.go:296] duration metric: took 71.364923ms for postStartSetup
	I0906 12:05:15.691156   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.691327   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:15.691341   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.691453   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.691544   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.691628   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.691712   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.732095   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:15.732157   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:15.785220   12253 fix.go:56] duration metric: took 13.553838389s for fixHost
	I0906 12:05:15.785242   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.785373   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.785462   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785558   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.785650   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.785774   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:15.785926   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0906 12:05:15.785933   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:15.851168   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649515.950195219
	
	I0906 12:05:15.851179   12253 fix.go:216] guest clock: 1725649515.950195219
	I0906 12:05:15.851184   12253 fix.go:229] Guest: 2024-09-06 12:05:15.950195219 -0700 PDT Remote: 2024-09-06 12:05:15.785232 -0700 PDT m=+13.999000936 (delta=164.963219ms)
	I0906 12:05:15.851205   12253 fix.go:200] guest clock delta is within tolerance: 164.963219ms
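	fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to the host timestamp, and accepts the drift when the delta is small. The arithmetic in the log lines above can be re-checked with a throwaway awk one-liner (timestamps copied from the log; the 1s tolerance here is an assumption for illustration, not minikube's configured value):

```shell
# Recompute the guest-vs-host clock delta logged above, in milliseconds.
guest=1725649515.950195219   # guest `date +%s.%N`
host=1725649515.785232       # host-side timestamp at the same moment
awk -v g="$guest" -v h="$host" \
  'BEGIN { d = (g - h) * 1000; printf "delta=%.1fms within_1s=%s\n", d, (d < 1000 ? "yes" : "no") }'
```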
	I0906 12:05:15.851209   12253 start.go:83] releasing machines lock for "ha-343000", held for 13.619855055s
	I0906 12:05:15.851228   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851359   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:15.851455   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851761   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851860   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:15.851943   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:15.851974   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852006   12253 ssh_runner.go:195] Run: cat /version.json
	I0906 12:05:15.852029   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:15.852070   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852126   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:15.852163   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852217   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:15.852273   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852292   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:15.852391   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.852414   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:15.945582   12253 ssh_runner.go:195] Run: systemctl --version
	I0906 12:05:15.950518   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 12:05:15.954710   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:15.954750   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:15.972724   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:15.972739   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:15.972842   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:15.997626   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:16.009969   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:16.021002   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.021063   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:16.029939   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.039024   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:16.047772   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:16.056625   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:16.065543   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:16.074247   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:16.082976   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:16.091738   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:16.099691   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:16.107701   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.207522   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:16.227285   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:16.227363   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:16.242536   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.255682   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:16.272770   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:16.283410   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.293777   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:16.316221   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:16.326357   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:16.341265   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:16.344224   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:16.351341   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:16.364686   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:16.462680   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:16.567102   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:16.567167   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:16.581141   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:16.682906   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:19.018795   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.33586105s)
	I0906 12:05:19.018863   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:19.029907   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:19.042839   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.053183   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:19.161103   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:19.269627   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.376110   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:19.389292   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:19.400498   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.508773   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:19.574293   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:19.574369   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:19.578648   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:19.578702   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:19.581725   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:19.611289   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:19.611360   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.628755   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:19.690349   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:19.690435   12253 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 12:05:19.690798   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:19.695532   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
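	The /etc/hosts command above is a filter-then-append update: drop any existing `host.minikube.internal` line, append the fresh mapping, write to a temp file, then copy it over /etc/hosts. A sketch of the same pattern against a throwaway file (the IP and hostname are copied from the log; the stale 192.169.0.99 entry is invented to show the replacement):

```shell
# Filter-then-append update of a hosts file, via a temp copy like the
# real command; operates on a throwaway file instead of /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.169.0.99\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v 'host\.minikube\.internal' "$hosts"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"    # the real command does: sudo cp /tmp/h.$$ /etc/hosts
cat "$hosts"
```

The copy (rather than a move) preserves the ownership and mode of the existing /etc/hosts.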
	I0906 12:05:19.705484   12253 kubeadm.go:883] updating cluster {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:05:19.705569   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:19.705619   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.718680   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.718691   12253 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:05:19.718764   12253 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:05:19.731988   12253 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:05:19.732008   12253 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:05:19.732017   12253 kubeadm.go:934] updating node { 192.169.0.24 8443 v1.31.0 docker true true} ...
	I0906 12:05:19.732095   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:19.732160   12253 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:05:19.769790   12253 cni.go:84] Creating CNI manager for ""
	I0906 12:05:19.769810   12253 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 12:05:19.769820   12253 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:05:19.769836   12253 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.24 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-343000 NodeName:ha-343000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:05:19.769924   12253 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-343000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:05:19.769938   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:19.769993   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:19.783021   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:19.783091   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:19.783139   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:19.790731   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:19.790780   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 12:05:19.798087   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 12:05:19.811294   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:19.826571   12253 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0906 12:05:19.840214   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:19.853805   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:19.856803   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:19.866597   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:19.969582   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:19.984116   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.24
	I0906 12:05:19.984128   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:19.984139   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:19.984324   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:19.984402   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:19.984413   12253 certs.go:256] generating profile certs ...
	I0906 12:05:19.984529   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:19.984611   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.76438f57
	I0906 12:05:19.984683   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:19.984690   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:19.984715   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:19.984733   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:19.984750   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:19.984767   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:19.984795   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:19.984823   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:19.984846   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:19.984950   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:19.984995   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:19.985004   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:19.985045   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:19.985074   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:19.985102   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:19.985164   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:19.985201   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:19.985223   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:19.985241   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:19.985738   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:20.016977   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:20.040002   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:20.074896   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:20.096785   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:20.117992   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:20.152101   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:20.181980   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:20.249104   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:20.310747   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:20.334377   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:20.354759   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:05:20.368573   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:20.372727   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:20.381943   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385218   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.385254   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:20.389369   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:20.398370   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:20.407468   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410735   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.410769   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:20.414896   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:20.423953   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:20.432893   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436127   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.436161   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:20.440280   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
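The `test -L … || ln -fs` commands above install each CA under its OpenSSL subject-hash name (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) so directory-based lookup can find it. A minimal standalone sketch of the same technique, using a throwaway self-signed cert in a temp dir (all paths here are illustrative, not from the log):

```shell
# Create a scratch cert, compute its subject hash, and link it as <hash>.0
# -- the name format OpenSSL's -CApath directory lookup expects.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
# Directory lookup now resolves the cert via the hash symlink:
openssl verify -CApath "$tmp" "$tmp/ca.pem"
```

This is what `openssl x509 -hash -noout -in …` in the log is computing: the 8-hex-digit hash that becomes the symlink's basename.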
	I0906 12:05:20.449469   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:20.452834   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:20.457085   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:20.461715   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:20.466070   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:20.470282   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:20.474449   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
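The `-checkend 86400` runs above probe whether each cluster cert expires within the next 24 hours; `openssl x509 -checkend N` exits non-zero if the cert will expire within N seconds, which is the signal minikube uses to decide on renewal. A self-contained sketch with a throwaway 2-day cert (paths and subject are illustrative):

```shell
# Cert valid for 2 days: passes a 24h check, fails a 72h check.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=short-lived" -keyout "$tmp/k" -out "$tmp/c.pem" 2>/dev/null
openssl x509 -noout -in "$tmp/c.pem" -checkend 86400  && echo "valid beyond 24h"
openssl x509 -noout -in "$tmp/c.pem" -checkend 259200 || echo "expires within 72h"
```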
	I0906 12:05:20.478690   12253 kubeadm.go:392] StartCluster: {Name:ha-343000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:
192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.27 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fa
lse helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:05:20.478796   12253 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:05:20.491888   12253 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:05:20.500336   12253 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:05:20.500348   12253 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:05:20.500388   12253 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:05:20.508605   12253 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:05:20.508923   12253 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-343000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.509004   12253 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "ha-343000" cluster setting kubeconfig missing "ha-343000" context setting]
	I0906 12:05:20.509222   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.509871   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.510072   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:05:20.510389   12253 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:05:20.510569   12253 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:05:20.518433   12253 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.24
	I0906 12:05:20.518445   12253 kubeadm.go:597] duration metric: took 18.093623ms to restartPrimaryControlPlane
	I0906 12:05:20.518450   12253 kubeadm.go:394] duration metric: took 39.76917ms to StartCluster
	I0906 12:05:20.518463   12253 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.518535   12253 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:20.518965   12253 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:20.519194   12253 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.24 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:20.519207   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:05:20.519217   12253 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:05:20.519329   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.562952   12253 out.go:177] * Enabled addons: 
	I0906 12:05:20.584902   12253 addons.go:510] duration metric: took 65.689522ms for enable addons: enabled=[]
	I0906 12:05:20.584940   12253 start.go:246] waiting for cluster config update ...
	I0906 12:05:20.584973   12253 start.go:255] writing updated cluster config ...
	I0906 12:05:20.608171   12253 out.go:201] 
	I0906 12:05:20.630349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:20.630488   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.652951   12253 out.go:177] * Starting "ha-343000-m02" control-plane node in "ha-343000" cluster
	I0906 12:05:20.695164   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:05:20.695203   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:05:20.695405   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:05:20.695421   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:05:20.695517   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.696367   12253 start.go:360] acquireMachinesLock for ha-343000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:05:20.696454   12253 start.go:364] duration metric: took 67.794µs to acquireMachinesLock for "ha-343000-m02"
	I0906 12:05:20.696472   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:05:20.696479   12253 fix.go:54] fixHost starting: m02
	I0906 12:05:20.696771   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:20.696805   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:20.705845   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56329
	I0906 12:05:20.706183   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:20.706528   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:20.706543   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:20.706761   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:20.706875   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.706980   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 12:05:20.707064   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.707136   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12118
	I0906 12:05:20.708055   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.708088   12253 fix.go:112] recreateIfNeeded on ha-343000-m02: state=Stopped err=<nil>
	I0906 12:05:20.708098   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	W0906 12:05:20.708185   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:05:20.734735   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m02" ...
	I0906 12:05:20.776747   12253 main.go:141] libmachine: (ha-343000-m02) Calling .Start
	I0906 12:05:20.777073   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.777115   12253 main.go:141] libmachine: (ha-343000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid
	I0906 12:05:20.778701   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 12118 missing from process table
	I0906 12:05:20.778717   12253 main.go:141] libmachine: (ha-343000-m02) DBG | pid 12118 is in state "Stopped"
	I0906 12:05:20.778778   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid...
	I0906 12:05:20.779095   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Using UUID 2c74355e-3595-46f3-9664-55c72aec5dea
	I0906 12:05:20.806950   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Generated MAC a2:d5:dd:3d:e9:56
	I0906 12:05:20.806972   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:05:20.807155   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807233   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2c74355e-3595-46f3-9664-55c72aec5dea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037d500)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:05:20.807304   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2c74355e-3595-46f3-9664-55c72aec5dea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machine
s/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:05:20.807361   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2c74355e-3595-46f3-9664-55c72aec5dea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/ha-343000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:05:20.807374   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:05:20.808851   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 DEBUG: hyperkit: Pid is 12276
	I0906 12:05:20.809435   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Attempt 0
	I0906 12:05:20.809451   12253 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:20.809514   12253 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 12276
	I0906 12:05:20.811081   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Searching for a2:d5:dd:3d:e9:56 in /var/db/dhcpd_leases ...
	I0906 12:05:20.811162   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:05:20.811181   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:05:20.811209   12253 main.go:141] libmachine: (ha-343000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca2f2}
	I0906 12:05:20.811220   12253 main.go:141] libmachine: (ha-343000-m02) DBG | Found match: a2:d5:dd:3d:e9:56
	I0906 12:05:20.811238   12253 main.go:141] libmachine: (ha-343000-m02) DBG | IP: 192.169.0.25
	I0906 12:05:20.811245   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetConfigRaw
	I0906 12:05:20.811904   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:20.812111   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:05:20.812569   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:05:20.812582   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:20.812711   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:20.812849   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:20.812941   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:20.813131   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:20.813262   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:20.813401   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:20.813411   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:05:20.817160   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:05:20.825311   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:05:20.826263   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:20.826278   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:20.826305   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:20.826316   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.214947   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:05:21.214961   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:05:21.329668   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:05:21.329695   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:05:21.329711   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:05:21.329721   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:05:21.330549   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:05:21.330560   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:05:26.960134   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0906 12:05:26.960175   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0906 12:05:26.960183   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0906 12:05:26.984271   12253 main.go:141] libmachine: (ha-343000-m02) DBG | 2024/09/06 12:05:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0906 12:05:30.128139   12253 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.25:22: connect: connection refused
	I0906 12:05:33.191918   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:05:33.191932   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192104   12253 buildroot.go:166] provisioning hostname "ha-343000-m02"
	I0906 12:05:33.192113   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.192203   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.192293   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.192374   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192456   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.192573   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.192685   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.192834   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.192848   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m02 && echo "ha-343000-m02" | sudo tee /etc/hostname
	I0906 12:05:33.271080   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m02
	
	I0906 12:05:33.271107   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.271242   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.271343   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271432   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.271517   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.271653   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.271816   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.271828   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
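The SSH command above either rewrites an existing `127.0.1.1` line or appends one so the guest's `/etc/hosts` names the node. The same logic can be exercised against a scratch copy of the file (this sketch assumes GNU grep/sed, as in the buildroot guest; the sample hostname and file contents are illustrative):

```shell
# Apply the log's grep/sed hostname fix to a scratch hosts file.
hosts=$(mktemp); name=ha-343000-m02
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q "^127.0.1.1[[:space:]]" "$hosts"; then
    # Existing 127.0.1.1 entry: replace its name in place.
    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # No entry yet: append one.
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

The empty SSH output that follows in the log is the expected result of the `sed` branch, which rewrites the line silently; only the `tee -a` branch would echo the new entry.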
	I0906 12:05:33.340749   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:05:33.340766   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:05:33.340776   12253 buildroot.go:174] setting up certificates
	I0906 12:05:33.340781   12253 provision.go:84] configureAuth start
	I0906 12:05:33.340788   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetMachineName
	I0906 12:05:33.340917   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:33.341015   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.341102   12253 provision.go:143] copyHostCerts
	I0906 12:05:33.341127   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341183   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:05:33.341189   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:05:33.341303   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:05:33.341481   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341516   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:05:33.341521   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:05:33.341626   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:05:33.341793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341824   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:05:33.341829   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:05:33.341902   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:05:33.342105   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m02 san=[127.0.0.1 192.169.0.25 ha-343000-m02 localhost minikube]
	I0906 12:05:33.430053   12253 provision.go:177] copyRemoteCerts
	I0906 12:05:33.430099   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:05:33.430112   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.430247   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.430337   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.430424   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.430498   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:33.468786   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:05:33.468854   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:05:33.488429   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:05:33.488502   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:05:33.507788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:05:33.507853   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:05:33.527149   12253 provision.go:87] duration metric: took 186.359429ms to configureAuth
	I0906 12:05:33.527164   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:05:33.527349   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:33.527363   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:33.527493   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.527581   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.527670   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527752   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.527834   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.527941   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.528081   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.528089   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:05:33.592983   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:05:33.592995   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:05:33.593066   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:05:33.593077   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.593197   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.593303   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593392   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.593487   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.593630   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.593775   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.593821   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:05:33.669226   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:05:33.669253   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:33.669404   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:33.669513   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669628   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:33.669726   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:33.669876   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:33.670026   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:33.670038   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:05:35.327313   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:05:35.327328   12253 machine.go:96] duration metric: took 14.51472045s to provisionDockerMachine
	I0906 12:05:35.327335   12253 start.go:293] postStartSetup for "ha-343000-m02" (driver="hyperkit")
	I0906 12:05:35.327345   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:05:35.327357   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.327550   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:05:35.327564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.327658   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.327737   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.327824   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.327895   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.374953   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:05:35.380104   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:05:35.380118   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:05:35.380209   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:05:35.380346   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:05:35.380353   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:05:35.380535   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:05:35.392904   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:35.425316   12253 start.go:296] duration metric: took 97.970334ms for postStartSetup
	I0906 12:05:35.425336   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.425510   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:05:35.425521   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.425611   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.425700   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.425784   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.425866   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.465210   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:05:35.465270   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:05:35.519276   12253 fix.go:56] duration metric: took 14.822763667s for fixHost
	I0906 12:05:35.519322   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.519466   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.519564   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519682   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.519766   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.519897   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:05:35.520049   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.25 22 <nil> <nil>}
	I0906 12:05:35.520058   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:05:35.586671   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649535.517793561
	
	I0906 12:05:35.586682   12253 fix.go:216] guest clock: 1725649535.517793561
	I0906 12:05:35.586690   12253 fix.go:229] Guest: 2024-09-06 12:05:35.517793561 -0700 PDT Remote: 2024-09-06 12:05:35.519294 -0700 PDT m=+33.733024449 (delta=-1.500439ms)
	I0906 12:05:35.586700   12253 fix.go:200] guest clock delta is within tolerance: -1.500439ms
	I0906 12:05:35.586703   12253 start.go:83] releasing machines lock for "ha-343000-m02", held for 14.890212868s
	I0906 12:05:35.586719   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.586869   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:35.609959   12253 out.go:177] * Found network options:
	I0906 12:05:35.631361   12253 out.go:177]   - NO_PROXY=192.169.0.24
	W0906 12:05:35.652026   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.652053   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652675   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652820   12253 main.go:141] libmachine: (ha-343000-m02) Calling .DriverName
	I0906 12:05:35.652904   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:05:35.652927   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	W0906 12:05:35.652986   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:05:35.653055   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:05:35.653068   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHHostname
	I0906 12:05:35.653078   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653249   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHPort
	I0906 12:05:35.653283   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653371   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHKeyPath
	I0906 12:05:35.653405   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653519   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetSSHUsername
	I0906 12:05:35.653550   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	I0906 12:05:35.653617   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m02/id_rsa Username:docker}
	W0906 12:05:35.689663   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:05:35.689725   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:05:35.741169   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:05:35.741183   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.741249   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:35.756280   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:05:35.765285   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:05:35.774250   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:05:35.774298   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:05:35.783141   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.792103   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:05:35.800998   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:05:35.809931   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:05:35.818930   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:05:35.828100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:05:35.837011   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:05:35.846071   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:05:35.854051   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:05:35.862225   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:35.953449   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:05:35.973036   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:05:35.973102   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:05:35.989701   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.002119   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:05:36.020969   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:05:36.032323   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.043370   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:05:36.064919   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:05:36.076134   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:05:36.091185   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:05:36.094041   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:05:36.101975   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:05:36.115524   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:05:36.210477   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:05:36.307446   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:05:36.307474   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:05:36.321506   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:36.425142   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:05:38.743512   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.31834803s)
	I0906 12:05:38.743573   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:05:38.754689   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:05:38.767595   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:38.778550   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:05:38.871803   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:05:38.967444   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.077912   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:05:39.091499   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:05:39.102647   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:39.199868   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:05:39.269396   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:05:39.269473   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:05:39.274126   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:05:39.274176   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:05:39.279526   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:05:39.307628   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:05:39.307702   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.324272   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:05:39.363496   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:05:39.384323   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:05:39.405031   12253 main.go:141] libmachine: (ha-343000-m02) Calling .GetIP
	I0906 12:05:39.405472   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:05:39.410152   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:39.420507   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:05:39.420684   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:39.420907   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.420932   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.430101   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56352
	I0906 12:05:39.430438   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.430796   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.430812   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.431028   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.431139   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:05:39.431212   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:05:39.431285   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:05:39.432244   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:05:39.432496   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:05:39.432518   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:05:39.441251   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56354
	I0906 12:05:39.441578   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:05:39.441903   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:05:39.441918   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:05:39.442138   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:05:39.442248   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:05:39.442348   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.25
	I0906 12:05:39.442355   12253 certs.go:194] generating shared ca certs ...
	I0906 12:05:39.442365   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:05:39.442516   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:05:39.442578   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:05:39.442588   12253 certs.go:256] generating profile certs ...
	I0906 12:05:39.442681   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:05:39.442772   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.7390dc12
	I0906 12:05:39.442830   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:05:39.442838   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:05:39.442859   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:05:39.442879   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:05:39.442896   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:05:39.442915   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:05:39.442951   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:05:39.442970   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:05:39.442987   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:05:39.443067   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:05:39.443106   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:05:39.443114   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:05:39.443147   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:05:39.443183   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:05:39.443212   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:05:39.443276   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:05:39.443310   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.443336   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.443355   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.443381   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:05:39.443473   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:05:39.443566   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:05:39.443662   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:05:39.443742   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:05:39.474601   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:05:39.477773   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:05:39.486087   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:05:39.489291   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:05:39.497797   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:05:39.500976   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:05:39.508902   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:05:39.512097   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:05:39.522208   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:05:39.529029   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:05:39.538558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:05:39.541788   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:05:39.551255   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:05:39.571163   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:05:39.590818   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:05:39.610099   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:05:39.629618   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:05:39.649203   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:05:39.668940   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:05:39.688319   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:05:39.707568   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:05:39.727593   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:05:39.746946   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:05:39.766191   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:05:39.779761   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:05:39.793389   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:05:39.807028   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:05:39.820798   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:05:39.834428   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:05:39.848169   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:05:39.861939   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:05:39.866268   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:05:39.875520   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878895   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.878936   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:05:39.883242   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:05:39.892394   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:05:39.901475   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904880   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.904919   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:05:39.909164   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:05:39.918366   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:05:39.927561   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.930968   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.931005   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:05:39.935325   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:05:39.944442   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:05:39.947919   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:05:39.952225   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:05:39.956510   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:05:39.960794   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:05:39.965188   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:05:39.969546   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:05:39.973805   12253 kubeadm.go:934] updating node {m02 192.169.0.25 8443 v1.31.0 docker true true} ...
	I0906 12:05:39.973869   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:05:39.973885   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:05:39.973920   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:05:39.987092   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:05:39.987133   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:05:39.987182   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:05:39.995535   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:05:39.995584   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:05:40.003762   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:05:40.017266   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:05:40.030719   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:05:40.044348   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:05:40.047310   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:05:40.057546   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.156340   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.171403   12253 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.25 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:05:40.171578   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:05:40.192574   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:05:40.213457   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:05:40.344499   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:05:40.359579   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:05:40.359776   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:05:40.359813   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:05:40.359973   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:05:40.360058   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:40.360063   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:40.360071   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:40.360075   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:47.989850   12253 round_trippers.go:574] Response Status:  in 7629 milliseconds
	I0906 12:05:48.990862   12253 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:48.990895   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:48.990902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:48.990922   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:49.992764   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:49.992860   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.1:56357->192.169.0.24:8443: read: connection reset by peer
	I0906 12:05:49.992914   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:49.992923   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:49.992931   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:49.992938   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:50.992884   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:50.992985   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:50.992993   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:50.993001   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:50.993007   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:51.994156   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:51.994218   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:51.994272   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:51.994282   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:51.994293   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:51.994300   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:52.994610   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:52.994678   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:52.994684   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:52.994690   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:52.994695   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:53.996452   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:53.996513   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:53.996568   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:53.996577   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:53.996587   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:53.996600   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:54.996281   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:54.996431   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:54.996445   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:54.996456   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:54.996470   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997732   12253 round_trippers.go:574] Response Status:  in 1001 milliseconds
	I0906 12:05:55.997791   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:55.997834   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:55.997841   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:55.997848   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:55.997855   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998659   12253 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0906 12:05:56.998737   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:56.998743   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:56.998748   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:56.998753   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:05:57.998704   12253 round_trippers.go:574] Response Status:  in 999 milliseconds
	I0906 12:05:57.998768   12253 node_ready.go:53] error getting node "ha-343000-m02": Get "https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02": dial tcp 192.169.0.24:8443: connect: connection refused
	I0906 12:05:57.998824   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:05:57.998830   12253 round_trippers.go:469] Request Headers:
	I0906 12:05:57.998841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:05:57.998847   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.234879   12253 round_trippers.go:574] Response Status: 200 OK in 2236 milliseconds
	I0906 12:06:00.235584   12253 node_ready.go:49] node "ha-343000-m02" has status "Ready":"True"
	I0906 12:06:00.235597   12253 node_ready.go:38] duration metric: took 19.875567395s for node "ha-343000-m02" to be "Ready" ...
	I0906 12:06:00.235604   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:00.235643   12253 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:06:00.235653   12253 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:06:00.235696   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:00.235701   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.235707   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.235711   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.262088   12253 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0906 12:06:00.268356   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.268408   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:00.268414   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.268421   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.268427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.271139   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.271625   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.271633   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.271638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.271642   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.273753   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:00.274136   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.274144   12253 pod_ready.go:82] duration metric: took 5.774893ms for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274150   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.274179   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:00.274184   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.274189   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.274192   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.275924   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.276344   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.276351   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.276355   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.276360   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278001   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.278322   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.278329   12253 pod_ready.go:82] duration metric: took 4.174121ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278335   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.278363   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:00.278368   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.278373   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.278379   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.280145   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.280523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:00.280530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.280535   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.280540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.282107   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.282477   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:00.282486   12253 pod_ready.go:82] duration metric: took 4.146745ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282492   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:00.282522   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.282528   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.282534   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.282537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.284223   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.284663   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.284670   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.284676   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.284679   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.286441   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:00.782726   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:00.782751   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.782796   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.782807   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.786175   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:00.786692   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:00.786700   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:00.786706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:00.786710   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:00.788874   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.283655   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.283671   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.283678   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.283683   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.285985   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.286465   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.286473   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.286481   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.286485   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.288565   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.782633   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:01.782651   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.782659   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.782664   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.785843   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:01.786296   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:01.786304   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.786309   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.786314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.788345   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.788771   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.788779   12253 pod_ready.go:82] duration metric: took 1.506279407s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788786   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.788823   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:01.788828   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.788833   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.788838   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.790798   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:01.791160   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:01.791171   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.791184   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.791187   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.793250   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:01.793611   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:01.793620   12253 pod_ready.go:82] duration metric: took 4.828788ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.793631   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:01.837481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:01.837495   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:01.837504   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:01.837509   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:01.840718   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.037469   12253 request.go:632] Waited for 196.356353ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037506   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:02.037512   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.037520   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.037525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.040221   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.040550   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:02.040560   12253 pod_ready.go:82] duration metric: took 246.922589ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.040567   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:02.237374   12253 request.go:632] Waited for 196.770161ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237419   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.237430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.237436   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.237442   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.240098   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.437383   12253 request.go:632] Waited for 196.723319ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437429   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.437436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.437443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.437449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.440277   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:02.636447   12253 request.go:632] Waited for 94.227022ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636509   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:02.636516   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.636524   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.636528   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.640095   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:02.837639   12253 request.go:632] Waited for 197.104367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837707   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:02.837717   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:02.837763   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:02.837788   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:02.841651   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:03.040768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.040781   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.040789   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.040793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.043403   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:03.236506   12253 request.go:632] Waited for 192.559607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236606   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.236618   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.236631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.236637   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.240751   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.540928   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:03.540954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.540973   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.540980   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.545016   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:03.637802   12253 request.go:632] Waited for 92.404425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637881   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:03.637890   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:03.637902   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:03.637910   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:03.642163   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.041768   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.041794   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.041804   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.041813   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.046193   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:04.047251   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.047260   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.047266   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.047277   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.056137   12253 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0906 12:06:04.056428   12253 pod_ready.go:103] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:04.541406   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:04.541425   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.541434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.541439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.544224   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:04.544684   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:04.544691   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:04.544697   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:04.544707   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:04.547090   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.040907   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:05.040922   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.040930   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.040934   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.044733   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.045134   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:05.045143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.045149   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.045152   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.047168   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:05.047571   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.047581   12253 pod_ready.go:82] duration metric: took 3.007003521s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047587   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.047621   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:05.047626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.047631   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.047636   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.049432   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:05.236368   12253 request.go:632] Waited for 186.419986ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236497   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:05.236514   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.236525   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.236532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.239828   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.240204   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.240214   12253 pod_ready.go:82] duration metric: took 192.620801ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.240220   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.435846   12253 request.go:632] Waited for 195.558833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:05.435906   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.435914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.435921   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.438946   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.636650   12253 request.go:632] Waited for 197.107158ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636711   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:05.636719   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.636728   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.636733   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.639926   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:05.640212   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:05.640221   12253 pod_ready.go:82] duration metric: took 399.995302ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.640232   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:05.837401   12253 request.go:632] Waited for 197.103806ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:05.837486   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:05.837513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:05.837523   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:05.840662   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.035821   12253 request.go:632] Waited for 194.603254ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:06.035950   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.035962   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.035968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.039252   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.039561   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.039571   12253 pod_ready.go:82] duration metric: took 399.332528ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.039578   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.236804   12253 request.go:632] Waited for 197.127943ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236841   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:06.236849   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.236856   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.236861   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.239571   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.435983   12253 request.go:632] Waited for 195.836904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436083   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:06.436095   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.436107   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.436115   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.440028   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.440297   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.440306   12253 pod_ready.go:82] duration metric: took 400.722778ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.440313   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.635911   12253 request.go:632] Waited for 195.558637ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635989   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:06.635997   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.636005   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.636009   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.638766   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:06.836563   12253 request.go:632] Waited for 197.42239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836630   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:06.836640   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:06.836651   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:06.836656   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:06.840182   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:06.840437   12253 pod_ready.go:93] pod "kube-proxy-8hww6" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:06.840446   12253 pod_ready.go:82] duration metric: took 400.127213ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:06.840453   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.036000   12253 request.go:632] Waited for 195.50345ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:07.036078   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.036093   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.036101   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.039960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.237550   12253 request.go:632] Waited for 197.186932ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237618   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:07.237627   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.237638   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.237645   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.241824   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:07.242186   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.242196   12253 pod_ready.go:82] duration metric: took 401.736827ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.242202   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.437080   12253 request.go:632] Waited for 194.824311ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437120   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:07.437127   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.437134   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.437177   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.439746   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:07.636668   12253 request.go:632] Waited for 196.435868ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636764   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:07.636773   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.636784   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.636790   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.640555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:07.640971   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:07.640979   12253 pod_ready.go:82] duration metric: took 398.771488ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.640986   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:07.837782   12253 request.go:632] Waited for 196.72045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837885   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:07.837895   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:07.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:07.837913   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:07.841222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.037474   12253 request.go:632] Waited for 195.707367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037543   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.037551   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.037559   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.037564   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.041008   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.237863   12253 request.go:632] Waited for 96.589125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238009   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.238027   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.238039   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.238064   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.241278   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.436102   12253 request.go:632] Waited for 194.439362ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436137   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.436143   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.436151   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.436183   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.439043   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:08.642356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:08.642376   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.642388   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.642397   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.645933   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:08.837859   12253 request.go:632] Waited for 191.363155ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837895   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:08.837900   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:08.837907   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:08.837911   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:08.841081   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.141167   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.141182   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.141191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.141195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.144158   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.235895   12253 request.go:632] Waited for 91.258445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235957   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.235964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.235972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.235977   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.239065   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:09.641494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:09.641508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.641517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.641521   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.644350   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.644757   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:09.644765   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:09.644771   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:09.644774   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:09.647091   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:09.647426   12253 pod_ready.go:103] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:10.141899   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:10.141923   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.141934   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.141941   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.145540   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.145973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.145981   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.145987   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.145989   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.148176   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.148538   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.148547   12253 pod_ready.go:82] duration metric: took 2.507551998s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.148554   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.235772   12253 request.go:632] Waited for 87.183047ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235805   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:10.235811   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.235831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.235849   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.238046   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.437551   12253 request.go:632] Waited for 199.151796ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437619   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:10.437626   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.437643   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.437648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.440639   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:10.440964   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.440974   12253 pod_ready.go:82] duration metric: took 292.414078ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.440981   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.636354   12253 request.go:632] Waited for 195.279783ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636426   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:10.636437   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.636450   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.636456   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.641024   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:10.836907   12253 request.go:632] Waited for 195.513588ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.836991   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:10.837001   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:10.837012   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:10.837020   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:10.840787   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:10.841194   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:10.841203   12253 pod_ready.go:82] duration metric: took 400.216153ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:10.841209   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.036390   12253 request.go:632] Waited for 195.137597ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036488   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:11.036499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.036510   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.036517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.040104   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:11.236464   12253 request.go:632] Waited for 195.741522ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236494   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:11.236499   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.236507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.236513   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.244008   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:11.244389   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:11.244399   12253 pod_ready.go:82] duration metric: took 403.184015ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:11.244409   12253 pod_ready.go:39] duration metric: took 11.008775818s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:11.244428   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:11.244490   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:11.260044   12253 api_server.go:72] duration metric: took 31.088552933s to wait for apiserver process to appear ...
	I0906 12:06:11.260057   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:11.260076   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:11.268665   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:11.268720   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:11.268725   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.268730   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.268734   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.269258   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:11.269330   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:11.269341   12253 api_server.go:131] duration metric: took 9.279203ms to wait for apiserver health ...
	I0906 12:06:11.269351   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:11.436974   12253 request.go:632] Waited for 167.586901ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437022   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.437029   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.437043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.437047   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.441302   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:11.447157   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:11.447183   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447192   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.447198   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.447201   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.447204   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.447208   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447211   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.447214   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.447218   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.447223   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.447228   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.447232   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.447237   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.447241   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.447244   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.447247   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.447253   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.447258   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.447264   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.447268   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.447270   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.447273   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.447276   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.447294   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.447303   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.447308   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.447313   12253 system_pods.go:74] duration metric: took 177.956833ms to wait for pod list to return data ...
	I0906 12:06:11.447319   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:11.637581   12253 request.go:632] Waited for 190.208152ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637651   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:11.637657   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.637664   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.637668   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.650462   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:11.650666   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:11.650678   12253 default_sa.go:55] duration metric: took 203.353142ms for default service account to be created ...
	I0906 12:06:11.650687   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:11.837096   12253 request.go:632] Waited for 186.371823ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837128   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:11.837134   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:11.837139   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:11.837143   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:11.866992   12253 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0906 12:06:11.873145   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:11.873167   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873175   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:06:11.873181   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:11.873185   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:11.873188   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:11.873195   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873199   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:11.873202   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:11.873206   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0906 12:06:11.873211   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:06:11.873215   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:11.873219   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:11.873223   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:06:11.873227   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:11.873231   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:11.873233   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:11.873236   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:11.873240   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 12:06:11.873244   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:11.873247   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:11.873252   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:11.873256   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:11.873259   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:11.873262   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:11.873265   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:11.873268   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:11.873274   12253 system_pods.go:126] duration metric: took 222.581886ms to wait for k8s-apps to be running ...
	I0906 12:06:11.873283   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:11.873340   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:11.886025   12253 system_svc.go:56] duration metric: took 12.733456ms WaitForService to wait for kubelet
	I0906 12:06:11.886050   12253 kubeadm.go:582] duration metric: took 31.714560483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:11.886086   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:12.036232   12253 request.go:632] Waited for 150.073414ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036268   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:12.036273   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:12.036286   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:12.036290   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:12.048789   12253 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 12:06:12.049838   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049855   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049868   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049873   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049876   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049881   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049884   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:12.049888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:12.049893   12253 node_conditions.go:105] duration metric: took 163.797553ms to run NodePressure ...
	I0906 12:06:12.049902   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:12.049922   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:12.087274   12253 out.go:201] 
	I0906 12:06:12.123635   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:12.123705   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.161370   12253 out.go:177] * Starting "ha-343000-m03" control-plane node in "ha-343000" cluster
	I0906 12:06:12.219408   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:12.219442   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:12.219591   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:12.219605   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:12.219694   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.220349   12253 start.go:360] acquireMachinesLock for ha-343000-m03: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:12.220455   12253 start.go:364] duration metric: took 68.753µs to acquireMachinesLock for "ha-343000-m03"
	I0906 12:06:12.220476   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:12.220482   12253 fix.go:54] fixHost starting: m03
	I0906 12:06:12.220813   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:12.220843   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:12.230327   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56369
	I0906 12:06:12.230794   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:12.231264   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:12.231284   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:12.231543   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:12.231691   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.231816   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 12:06:12.231923   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.232050   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 12:06:12.233006   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.233040   12253 fix.go:112] recreateIfNeeded on ha-343000-m03: state=Stopped err=<nil>
	I0906 12:06:12.233052   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	W0906 12:06:12.233162   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:12.271360   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m03" ...
	I0906 12:06:12.312281   12253 main.go:141] libmachine: (ha-343000-m03) Calling .Start
	I0906 12:06:12.312472   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.312588   12253 main.go:141] libmachine: (ha-343000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid
	I0906 12:06:12.314085   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid 10460 missing from process table
	I0906 12:06:12.314111   12253 main.go:141] libmachine: (ha-343000-m03) DBG | pid 10460 is in state "Stopped"
	I0906 12:06:12.314145   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid...
	I0906 12:06:12.314314   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Using UUID 5abf6194-a669-4f35-b6fc-c88bfc629e81
	I0906 12:06:12.392247   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Generated MAC 3e:84:3d:bc:9c:31
	I0906 12:06:12.392279   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:12.392453   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392498   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5abf6194-a669-4f35-b6fc-c88bfc629e81", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003ac9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:12.392570   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5abf6194-a669-4f35-b6fc-c88bfc629e81", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:12.392621   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5abf6194-a669-4f35-b6fc-c88bfc629e81 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/ha-343000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:12.392631   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:12.394468   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 DEBUG: hyperkit: Pid is 12285
	I0906 12:06:12.395082   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Attempt 0
	I0906 12:06:12.395129   12253 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:12.395296   12253 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 12285
	I0906 12:06:12.398168   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Searching for 3e:84:3d:bc:9c:31 in /var/db/dhcpd_leases ...
	I0906 12:06:12.398286   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:12.398303   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:12.398316   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:12.398325   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:12.398339   12253 main.go:141] libmachine: (ha-343000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca1e7}
	I0906 12:06:12.398359   12253 main.go:141] libmachine: (ha-343000-m03) DBG | Found match: 3e:84:3d:bc:9c:31
	I0906 12:06:12.398382   12253 main.go:141] libmachine: (ha-343000-m03) DBG | IP: 192.169.0.26
	I0906 12:06:12.398414   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetConfigRaw
	I0906 12:06:12.399172   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:12.399462   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:12.400029   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:12.400042   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:12.400184   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:12.400344   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:12.400464   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400591   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:12.400728   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:12.400904   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:12.401165   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:12.401176   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:12.404210   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:12.438119   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:12.439198   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.439227   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.439241   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.439256   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.845267   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:12.845282   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:12.960204   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:12.960224   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:12.960244   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:12.960258   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:12.961041   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:12.961054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:12 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:06:18.729819   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:06:18.729887   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:06:18.729898   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:06:18.753054   12253 main.go:141] libmachine: (ha-343000-m03) DBG | 2024/09/06 12:06:18 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:06:23.465534   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:06:23.465548   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465717   12253 buildroot.go:166] provisioning hostname "ha-343000-m03"
	I0906 12:06:23.465726   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.465818   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.465902   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.465981   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466055   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.466146   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.466265   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.466412   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.466421   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m03 && echo "ha-343000-m03" | sudo tee /etc/hostname
	I0906 12:06:23.536843   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m03
	
	I0906 12:06:23.536860   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.536985   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.537079   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537171   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.537236   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.537354   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.537507   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.537525   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:06:23.606665   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:06:23.606681   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:06:23.606695   12253 buildroot.go:174] setting up certificates
	I0906 12:06:23.606700   12253 provision.go:84] configureAuth start
	I0906 12:06:23.606707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetMachineName
	I0906 12:06:23.606846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:23.606946   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.607022   12253 provision.go:143] copyHostCerts
	I0906 12:06:23.607051   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607104   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:06:23.607112   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:06:23.607235   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:06:23.607441   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607476   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:06:23.607482   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:06:23.607552   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:06:23.607719   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607747   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:06:23.607752   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:06:23.607836   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:06:23.607981   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m03 san=[127.0.0.1 192.169.0.26 ha-343000-m03 localhost minikube]
	I0906 12:06:23.699873   12253 provision.go:177] copyRemoteCerts
	I0906 12:06:23.699921   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:06:23.699935   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.700077   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.700175   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.700270   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.700376   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:23.737703   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:06:23.737771   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:06:23.757756   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:06:23.757827   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:06:23.777598   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:06:23.777673   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:06:23.797805   12253 provision.go:87] duration metric: took 191.09552ms to configureAuth
	I0906 12:06:23.797818   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:06:23.797988   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:23.798002   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:23.798134   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.798231   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.798314   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798400   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.798488   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.798597   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.798724   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.798732   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:06:23.860492   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:06:23.860504   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:06:23.860586   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:06:23.860599   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.860730   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.860807   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.860907   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.861010   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.861140   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.861285   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.861332   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:06:23.935021   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:06:23.935039   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:23.935186   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:23.935286   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935371   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:23.935478   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:23.935609   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:23.935750   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:23.935762   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:06:25.580352   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:06:25.580366   12253 machine.go:96] duration metric: took 13.180301802s to provisionDockerMachine
	I0906 12:06:25.580373   12253 start.go:293] postStartSetup for "ha-343000-m03" (driver="hyperkit")
	I0906 12:06:25.580380   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:06:25.580394   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.580572   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:06:25.580585   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.580672   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.580761   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.580846   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.580931   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.621691   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:06:25.626059   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:06:25.626069   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:06:25.626156   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:06:25.626292   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:06:25.626299   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:06:25.626479   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:06:25.640080   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:25.666256   12253 start.go:296] duration metric: took 85.87411ms for postStartSetup
	I0906 12:06:25.666279   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.666455   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:06:25.666469   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.666570   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.666655   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.666734   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.666815   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.704275   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:06:25.704337   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:06:25.737458   12253 fix.go:56] duration metric: took 13.516946704s for fixHost
	I0906 12:06:25.737482   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.737626   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.737732   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737832   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.737920   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.738049   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:25.738192   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.26 22 <nil> <nil>}
	I0906 12:06:25.738199   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:06:25.803149   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649585.904544960
	
	I0906 12:06:25.803162   12253 fix.go:216] guest clock: 1725649585.904544960
	I0906 12:06:25.803168   12253 fix.go:229] Guest: 2024-09-06 12:06:25.90454496 -0700 PDT Remote: 2024-09-06 12:06:25.737472 -0700 PDT m=+83.951104505 (delta=167.07296ms)
	I0906 12:06:25.803178   12253 fix.go:200] guest clock delta is within tolerance: 167.07296ms
	I0906 12:06:25.803182   12253 start.go:83] releasing machines lock for "ha-343000-m03", held for 13.582690615s
	I0906 12:06:25.803198   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.803329   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:25.825405   12253 out.go:177] * Found network options:
	I0906 12:06:25.846508   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25
	W0906 12:06:25.867569   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.867608   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.867639   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868707   12253 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 12:06:25.868819   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:06:25.868894   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	W0906 12:06:25.868907   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:06:25.868930   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:06:25.869032   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:06:25.869046   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 12:06:25.869089   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869194   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 12:06:25.869217   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869337   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 12:06:25.869358   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869497   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 12:06:25.869516   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 12:06:25.869640   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	W0906 12:06:25.904804   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:06:25.904860   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:06:25.953607   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:06:25.953623   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:25.953707   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:25.969069   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:06:25.977320   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:06:25.985732   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:06:25.985790   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:06:25.994169   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.002564   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:06:26.011076   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:06:26.019409   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:06:26.027829   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:06:26.036100   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:06:26.044789   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:06:26.053382   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:06:26.060878   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:06:26.068234   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.161656   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:06:26.180419   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:06:26.180540   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:06:26.197783   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.208495   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:06:26.223788   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:06:26.234758   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.245879   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:06:26.268201   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:06:26.279748   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:06:26.298675   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:06:26.301728   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:06:26.309959   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:06:26.323781   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:06:26.418935   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:06:26.520404   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:06:26.520429   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:06:26.534785   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:26.635772   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:06:28.931869   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.296074778s)
	I0906 12:06:28.931929   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:06:28.943824   12253 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0906 12:06:28.959441   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:28.970674   12253 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:06:29.066042   12253 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:06:29.168956   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.286202   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:06:29.299988   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:06:29.311495   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:29.429259   12253 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:06:29.496621   12253 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:06:29.496705   12253 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:06:29.502320   12253 start.go:563] Will wait 60s for crictl version
	I0906 12:06:29.502374   12253 ssh_runner.go:195] Run: which crictl
	I0906 12:06:29.505587   12253 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:06:29.534004   12253 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:06:29.534083   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.551834   12253 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:06:29.590600   12253 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:06:29.632268   12253 out.go:177]   - env NO_PROXY=192.169.0.24
	I0906 12:06:29.653333   12253 out.go:177]   - env NO_PROXY=192.169.0.24,192.169.0.25
	I0906 12:06:29.674153   12253 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 12:06:29.674373   12253 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:06:29.677525   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:29.687202   12253 mustload.go:65] Loading cluster: ha-343000
	I0906 12:06:29.687389   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:29.687610   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.687639   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.696472   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56391
	I0906 12:06:29.696894   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.697234   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.697246   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.697502   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.697641   12253 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 12:06:29.697736   12253 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:29.697809   12253 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 12266
	I0906 12:06:29.698794   12253 host.go:66] Checking if "ha-343000" exists ...
	I0906 12:06:29.699046   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:29.699070   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:29.707791   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56393
	I0906 12:06:29.708136   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:29.708457   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:29.708468   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:29.708696   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:29.708812   12253 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 12:06:29.708911   12253 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000 for IP: 192.169.0.26
	I0906 12:06:29.708917   12253 certs.go:194] generating shared ca certs ...
	I0906 12:06:29.708928   12253 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:06:29.709069   12253 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:06:29.709123   12253 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:06:29.709132   12253 certs.go:256] generating profile certs ...
	I0906 12:06:29.709257   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key
	I0906 12:06:29.709340   12253 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key.e464bc73
	I0906 12:06:29.709394   12253 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key
	I0906 12:06:29.709401   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:06:29.709422   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:06:29.709447   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:06:29.709465   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:06:29.709482   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:06:29.709510   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:06:29.709528   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:06:29.709550   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:06:29.709623   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:06:29.709661   12253 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:06:29.709669   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:06:29.709702   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:06:29.709732   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:06:29.709766   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:06:29.709833   12253 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:06:29.709868   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:06:29.709889   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:06:29.709908   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:29.709932   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 12:06:29.710030   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 12:06:29.710110   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 12:06:29.710211   12253 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 12:06:29.710304   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 12:06:29.742607   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 12:06:29.746569   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 12:06:29.754558   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 12:06:29.757841   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 12:06:29.765881   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 12:06:29.769140   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 12:06:29.778234   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 12:06:29.781483   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0906 12:06:29.789701   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 12:06:29.792877   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 12:06:29.801155   12253 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 12:06:29.804562   12253 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0906 12:06:29.812907   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:06:29.833527   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:06:29.854042   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:06:29.874274   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:06:29.894675   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 12:06:29.914759   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 12:06:29.935020   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:06:29.955774   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 12:06:29.976174   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:06:29.996348   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:06:30.016705   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:06:30.036752   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 12:06:30.050816   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 12:06:30.064469   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 12:06:30.078121   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0906 12:06:30.092155   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 12:06:30.106189   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0906 12:06:30.120313   12253 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 12:06:30.134091   12253 ssh_runner.go:195] Run: openssl version
	I0906 12:06:30.138549   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:06:30.147484   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151103   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.151157   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:06:30.155470   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:06:30.164282   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:06:30.173035   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176736   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.176783   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:06:30.181161   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:06:30.189862   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:06:30.198669   12253 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202224   12253 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.202268   12253 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:06:30.206651   12253 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:06:30.215322   12253 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:06:30.218903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:06:30.223374   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:06:30.227903   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:06:30.232564   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:06:30.237667   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:06:30.242630   12253 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:06:30.247576   12253 kubeadm.go:934] updating node {m03 192.169.0.26 8443 v1.31.0 docker true true} ...
	I0906 12:06:30.247652   12253 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-343000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-343000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:06:30.247670   12253 kube-vip.go:115] generating kube-vip config ...
	I0906 12:06:30.247719   12253 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 12:06:30.261197   12253 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 12:06:30.261239   12253 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 12:06:30.261300   12253 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:06:30.269438   12253 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:06:30.269496   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 12:06:30.277362   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 12:06:30.291520   12253 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:06:30.305340   12253 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0906 12:06:30.319495   12253 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0906 12:06:30.322637   12253 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:06:30.332577   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.441240   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.456369   12253 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.26 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:06:30.456602   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:30.477910   12253 out.go:177] * Verifying Kubernetes components...
	I0906 12:06:30.498557   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:06:30.628440   12253 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:06:30.645947   12253 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:06:30.646165   12253 kapi.go:59] client config for ha-343000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xe57cae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 12:06:30.646208   12253 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.24:8443
	I0906 12:06:30.646371   12253 node_ready.go:35] waiting up to 6m0s for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.646412   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:30.646417   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.646423   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.646427   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649121   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:30.649426   12253 node_ready.go:49] node "ha-343000-m03" has status "Ready":"True"
	I0906 12:06:30.649435   12253 node_ready.go:38] duration metric: took 3.055625ms for node "ha-343000-m03" to be "Ready" ...
	I0906 12:06:30.649441   12253 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:30.649480   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:30.649485   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.649491   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.649496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.655093   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:30.660461   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:30.660533   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:30.660539   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.660545   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.660550   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.664427   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:30.664864   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:30.664872   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:30.664877   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:30.664880   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:30.667569   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.161508   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.161522   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.161528   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.161531   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.164411   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.165052   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.165061   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.165070   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.165074   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.167897   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:31.660843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:31.660861   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.660868   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.660871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.668224   12253 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:06:31.668938   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:31.668954   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:31.668969   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:31.668987   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:31.674737   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:32.161451   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.161468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.161496   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.161501   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.164555   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.165061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.165069   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.165075   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.165078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.167689   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.661269   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:32.661285   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.661294   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.661316   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.664943   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:32.665460   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:32.665469   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:32.665475   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:32.665479   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:32.667934   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:32.668229   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:33.161930   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.161964   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.161971   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.161975   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.165689   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.166478   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.166488   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.166497   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.166503   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.169565   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.660809   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:33.660831   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.660841   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.660846   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.664137   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:33.665061   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:33.665071   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:33.665078   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:33.665099   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:33.667811   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.161378   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.161391   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.161398   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.161403   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.165094   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:34.165523   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.165531   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.165537   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.165540   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.167949   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.661206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:34.661222   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.661228   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.661230   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.663772   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:34.664499   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:34.664507   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:34.664513   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:34.664517   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:34.666543   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.161667   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.161689   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.161700   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.161705   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.166875   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.167311   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.167319   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.167324   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.167328   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.172902   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:35.173323   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:35.661973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:35.661988   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.661994   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.661998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.664583   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:35.664981   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:35.664989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:35.664998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:35.665001   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:35.667322   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.161747   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.161785   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.161793   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.161796   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.164939   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.165450   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.165459   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.165464   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.165474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.167808   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:36.661492   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:36.661508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.661532   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.661537   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.664941   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:36.665455   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:36.665464   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:36.665471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:36.665474   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:36.668192   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.161660   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.161678   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.161685   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.161688   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.164012   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.164541   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.164549   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.164555   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.164558   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.166577   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.662457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:37.662494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.662505   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.662511   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.665311   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.666039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:37.666048   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:37.666053   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:37.666056   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:37.668294   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:37.668600   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:38.162628   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.162646   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.162654   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.162659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.165660   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.166284   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.166292   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.166298   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.166301   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.168559   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.662170   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:38.662185   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.662191   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.662195   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.664733   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:38.665194   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:38.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:38.665207   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:38.665211   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:38.667563   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.161491   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.161508   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.161517   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.161522   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.164370   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.164762   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.164770   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.164776   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.164780   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.166614   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:39.661843   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:39.661860   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.661866   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.661871   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.664287   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:39.664950   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:39.664958   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:39.664964   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:39.664968   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:39.667194   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.160891   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.160921   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.160933   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.160955   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.165388   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:40.166039   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.166047   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.166052   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.166055   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.168212   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.168635   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:40.661892   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:40.661907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.661914   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.661917   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.664471   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:40.664962   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:40.664970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:40.664975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:40.664984   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:40.667379   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.160779   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.160797   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.160824   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.160830   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.163878   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:41.164433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.164441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.164446   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.164451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.166991   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.661124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:41.661138   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.661145   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.661149   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.663595   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:41.664206   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:41.664214   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:41.664220   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:41.664224   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:41.666219   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:42.161906   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.161926   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.161937   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.161945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.165222   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:42.165752   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.165760   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.165765   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.165769   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.167913   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.661255   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:42.661274   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.661282   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.661288   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.664242   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.664689   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:42.664697   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:42.664703   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:42.664706   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:42.666742   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:42.667053   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:43.161512   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.161530   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.161565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.161575   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.164590   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:43.165234   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.165242   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.165254   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.165258   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.167961   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.660826   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:43.660844   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.660873   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.660882   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.663557   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:43.663959   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:43.663966   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:43.663972   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:43.663976   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:43.665816   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.162103   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.162133   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.162158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.162164   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.165060   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.165598   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.165606   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.165612   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.165615   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.167589   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.662307   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:44.662328   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.662339   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.662344   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.665063   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:44.665602   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:44.665610   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:44.665615   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:44.665619   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:44.667607   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:44.667948   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:45.161277   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.161307   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.161314   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.161317   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.163751   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.164201   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.164209   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.164215   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.164217   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.166274   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.662080   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:45.662099   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.662106   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.662110   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.664692   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:45.665145   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:45.665152   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:45.665158   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:45.665162   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:45.667158   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.161983   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.162002   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.162011   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.162016   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.165135   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.165638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.165645   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.165650   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.165654   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.167660   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:46.660973   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:46.661022   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.661036   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.661046   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.664600   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:46.665041   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:46.665051   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:46.665056   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:46.665061   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:46.667006   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:47.161827   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.161883   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.161895   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.161902   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.165549   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:47.166029   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.166037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.166041   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.166045   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.168233   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:47.168577   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:47.661554   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:47.661603   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.661616   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.661625   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.665796   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:47.666259   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:47.666266   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:47.666272   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:47.666276   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:47.668466   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.161876   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.161891   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.161898   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.161901   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.164419   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.164835   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.164843   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.164849   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.164853   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.166837   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:48.661562   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:48.661577   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.661598   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.661603   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.663972   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:48.664457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:48.664465   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:48.664470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:48.664475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:48.666445   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:49.161410   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.161430   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.161438   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.161443   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.164478   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:49.164982   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.164989   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.164995   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.164998   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.167071   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.660698   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:49.660724   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.660736   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.660742   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.664916   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:49.665349   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:49.665357   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:49.665363   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:49.665367   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:49.667392   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:49.667753   12253 pod_ready.go:103] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"False"
	I0906 12:06:50.161030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.161065   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.161073   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.161080   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.163537   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.163963   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.163970   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.163975   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.163979   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.166093   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.661184   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:50.661238   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.661263   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.661267   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.663637   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:50.664117   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:50.664125   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:50.664131   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:50.664134   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:50.666067   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.161515   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.161550   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.161557   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.161561   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.163979   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.164681   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.164690   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.164694   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.164697   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.166790   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.661266   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-99jtt
	I0906 12:06:51.661291   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.661374   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.661387   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.664772   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:51.665195   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.665202   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.665206   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.665216   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.667400   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.667769   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.667779   12253 pod_ready.go:82] duration metric: took 21.007261829s for pod "coredns-6f6b679f8f-99jtt" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667785   12253 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.667821   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q4rhs
	I0906 12:06:51.667826   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.667831   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.667836   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.669791   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.670205   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.670213   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.670218   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.670221   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.672346   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.672671   12253 pod_ready.go:93] pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.672679   12253 pod_ready.go:82] duration metric: took 4.889471ms for pod "coredns-6f6b679f8f-q4rhs" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672685   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.672718   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000
	I0906 12:06:51.672723   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.672729   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.672737   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.674649   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.675030   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:51.675037   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.675043   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.675046   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.676915   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.677288   12253 pod_ready.go:93] pod "etcd-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.677297   12253 pod_ready.go:82] duration metric: took 4.607311ms for pod "etcd-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677303   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.677339   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m02
	I0906 12:06:51.677344   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.677349   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.677352   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.679418   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.679897   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:51.679907   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.679916   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.679920   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.681919   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.682327   12253 pod_ready.go:93] pod "etcd-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.682336   12253 pod_ready.go:82] duration metric: took 5.028149ms for pod "etcd-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682343   12253 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.682376   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/etcd-ha-343000-m03
	I0906 12:06:51.682381   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.682386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.682389   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.684781   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:51.685200   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:51.685207   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.685212   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.685215   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.687181   12253 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:06:51.687676   12253 pod_ready.go:93] pod "etcd-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:51.687685   12253 pod_ready.go:82] duration metric: took 5.337542ms for pod "etcd-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.687696   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:51.862280   12253 request.go:632] Waited for 174.544275ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862360   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000
	I0906 12:06:51.862372   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:51.862382   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:51.862386   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:51.865455   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.062085   12253 request.go:632] Waited for 196.080428ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062124   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:52.062130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.062136   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.062140   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.064928   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.065322   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.065331   12253 pod_ready.go:82] duration metric: took 377.628905ms for pod "kube-apiserver-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.065338   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.261393   12253 request.go:632] Waited for 196.009549ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261459   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m02
	I0906 12:06:52.261471   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.261485   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.261492   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.265336   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.461317   12253 request.go:632] Waited for 195.311084ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461356   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:52.461362   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.461370   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.461376   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.464202   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.464645   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.464654   12253 pod_ready.go:82] duration metric: took 399.309786ms for pod "kube-apiserver-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.464661   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.662233   12253 request.go:632] Waited for 197.535092ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662290   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-343000-m03
	I0906 12:06:52.662297   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.662305   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.662311   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.665143   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:52.862031   12253 request.go:632] Waited for 196.411368ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862119   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:52.862130   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:52.862140   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:52.862145   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:52.866136   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:52.866533   12253 pod_ready.go:93] pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:52.866543   12253 pod_ready.go:82] duration metric: took 401.876526ms for pod "kube-apiserver-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:52.866550   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.061387   12253 request.go:632] Waited for 194.796135ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061453   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000
	I0906 12:06:53.061462   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.061470   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.061476   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.064293   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:53.261526   12253 request.go:632] Waited for 196.74771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261638   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:53.261649   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.261659   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.261674   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.265603   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.266028   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.266036   12253 pod_ready.go:82] duration metric: took 399.480241ms for pod "kube-controller-manager-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.266042   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.461478   12253 request.go:632] Waited for 195.397016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461556   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m02
	I0906 12:06:53.461564   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.461571   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.461576   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.464932   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.661907   12253 request.go:632] Waited for 196.48537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661965   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:53.661991   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.661998   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.662002   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.665079   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:53.665555   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:53.665565   12253 pod_ready.go:82] duration metric: took 399.515968ms for pod "kube-controller-manager-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.665572   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:53.861347   12253 request.go:632] Waited for 195.73444ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861414   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-343000-m03
	I0906 12:06:53.861426   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:53.861434   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:53.861439   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:53.864177   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:54.061465   12253 request.go:632] Waited for 196.861398ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061517   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.061554   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.061565   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.061570   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.064700   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.065020   12253 pod_ready.go:93] pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.065030   12253 pod_ready.go:82] duration metric: took 399.451485ms for pod "kube-controller-manager-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.065037   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.263289   12253 request.go:632] Waited for 198.174584ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263384   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8hww6
	I0906 12:06:54.263411   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.263436   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.263461   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.266722   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.461554   12253 request.go:632] Waited for 194.387224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461599   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m04
	I0906 12:06:54.461609   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.461620   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.461627   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.465162   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.465533   12253 pod_ready.go:98] node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465543   12253 pod_ready.go:82] duration metric: took 400.500434ms for pod "kube-proxy-8hww6" in "kube-system" namespace to be "Ready" ...
	E0906 12:06:54.465549   12253 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-343000-m04" hosting pod "kube-proxy-8hww6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-343000-m04" has status "Ready":"Unknown"
	I0906 12:06:54.465555   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.662665   12253 request.go:632] Waited for 197.074891ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662731   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r285j
	I0906 12:06:54.662740   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.662749   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.662755   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.665777   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.862800   12253 request.go:632] Waited for 196.680356ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862911   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:54.862924   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:54.862936   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:54.862945   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:54.866911   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:54.867361   12253 pod_ready.go:93] pod "kube-proxy-r285j" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:54.867371   12253 pod_ready.go:82] duration metric: took 401.810264ms for pod "kube-proxy-r285j" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:54.867377   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.062512   12253 request.go:632] Waited for 195.060729ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062609   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x6pfk
	I0906 12:06:55.062629   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.062641   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.062648   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.066272   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:55.263362   12253 request.go:632] Waited for 196.717271ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263483   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:55.263494   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.263507   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.263520   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.268072   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.268453   12253 pod_ready.go:93] pod "kube-proxy-x6pfk" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.268462   12253 pod_ready.go:82] duration metric: took 401.079128ms for pod "kube-proxy-x6pfk" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.268469   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.462230   12253 request.go:632] Waited for 193.721938ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462312   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zjx8z
	I0906 12:06:55.462320   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.462348   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.462357   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.465173   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:55.662089   12253 request.go:632] Waited for 196.464134ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662239   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:55.662255   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.662267   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.662275   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.666427   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:55.666704   12253 pod_ready.go:93] pod "kube-proxy-zjx8z" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:55.666714   12253 pod_ready.go:82] duration metric: took 398.240112ms for pod "kube-proxy-zjx8z" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.666721   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:55.861681   12253 request.go:632] Waited for 194.913797ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861767   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000
	I0906 12:06:55.861778   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:55.861790   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:55.861799   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:55.865874   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:56.063343   12253 request.go:632] Waited for 197.091674ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063481   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000
	I0906 12:06:56.063491   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.063501   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.063508   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.067298   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.067689   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.067699   12253 pod_ready.go:82] duration metric: took 400.971333ms for pod "kube-scheduler-ha-343000" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.067706   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.261328   12253 request.go:632] Waited for 193.578385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261416   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m02
	I0906 12:06:56.261431   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.261443   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.261451   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.264964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.461367   12253 request.go:632] Waited for 196.051039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461433   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m02
	I0906 12:06:56.461441   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.461449   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.461454   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.464367   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:56.464786   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.464799   12253 pod_ready.go:82] duration metric: took 397.083037ms for pod "kube-scheduler-ha-343000-m02" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.464806   12253 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.662171   12253 request.go:632] Waited for 197.309952ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662326   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-343000-m03
	I0906 12:06:56.662340   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.662352   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.662363   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.665960   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.862106   12253 request.go:632] Waited for 195.559257ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862214   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes/ha-343000-m03
	I0906 12:06:56.862225   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.862236   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.862243   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.866072   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:56.866312   12253 pod_ready.go:93] pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 12:06:56.866321   12253 pod_ready.go:82] duration metric: took 401.509457ms for pod "kube-scheduler-ha-343000-m03" in "kube-system" namespace to be "Ready" ...
	I0906 12:06:56.866329   12253 pod_ready.go:39] duration metric: took 26.216828833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:06:56.866341   12253 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:06:56.866386   12253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:06:56.878910   12253 api_server.go:72] duration metric: took 26.422463192s to wait for apiserver process to appear ...
	I0906 12:06:56.878922   12253 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:06:56.878935   12253 api_server.go:253] Checking apiserver healthz at https://192.169.0.24:8443/healthz ...
	I0906 12:06:56.883745   12253 api_server.go:279] https://192.169.0.24:8443/healthz returned 200:
	ok
	I0906 12:06:56.883791   12253 round_trippers.go:463] GET https://192.169.0.24:8443/version
	I0906 12:06:56.883796   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:56.883803   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:56.883808   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:56.884469   12253 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:06:56.884556   12253 api_server.go:141] control plane version: v1.31.0
	I0906 12:06:56.884568   12253 api_server.go:131] duration metric: took 5.641059ms to wait for apiserver health ...
	I0906 12:06:56.884573   12253 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:06:57.061374   12253 request.go:632] Waited for 176.731786ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061457   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.061468   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.061480   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.061487   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.066391   12253 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:06:57.071924   12253 system_pods.go:59] 26 kube-system pods found
	I0906 12:06:57.071938   12253 system_pods.go:61] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.071942   12253 system_pods.go:61] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.071945   12253 system_pods.go:61] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.071948   12253 system_pods.go:61] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.071952   12253 system_pods.go:61] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.071955   12253 system_pods.go:61] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.071958   12253 system_pods.go:61] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.071962   12253 system_pods.go:61] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.071964   12253 system_pods.go:61] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.071967   12253 system_pods.go:61] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.071973   12253 system_pods.go:61] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.071977   12253 system_pods.go:61] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.071979   12253 system_pods.go:61] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.071982   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.071985   12253 system_pods.go:61] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.071988   12253 system_pods.go:61] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.071991   12253 system_pods.go:61] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.071993   12253 system_pods.go:61] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.071996   12253 system_pods.go:61] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.071999   12253 system_pods.go:61] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.072001   12253 system_pods.go:61] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.072004   12253 system_pods.go:61] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.072007   12253 system_pods.go:61] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.072009   12253 system_pods.go:61] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.072012   12253 system_pods.go:61] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.072017   12253 system_pods.go:61] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.072022   12253 system_pods.go:74] duration metric: took 187.444826ms to wait for pod list to return data ...
	I0906 12:06:57.072029   12253 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:06:57.261398   12253 request.go:632] Waited for 189.325312ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261443   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:06:57.261451   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.261471   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.261475   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.264018   12253 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:06:57.264078   12253 default_sa.go:45] found service account: "default"
	I0906 12:06:57.264086   12253 default_sa.go:55] duration metric: took 192.051635ms for default service account to be created ...
	I0906 12:06:57.264103   12253 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:06:57.461307   12253 request.go:632] Waited for 197.162907ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461342   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/namespaces/kube-system/pods
	I0906 12:06:57.461347   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.461367   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.461393   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.466559   12253 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 12:06:57.471959   12253 system_pods.go:86] 26 kube-system pods found
	I0906 12:06:57.471969   12253 system_pods.go:89] "coredns-6f6b679f8f-99jtt" [c6f0e0e0-ca35-4dc7-a18c-9bd31487f371] Running
	I0906 12:06:57.471974   12253 system_pods.go:89] "coredns-6f6b679f8f-q4rhs" [c294684b-90fb-4f77-941a-10faef810a0c] Running
	I0906 12:06:57.471977   12253 system_pods.go:89] "etcd-ha-343000" [642a798f-3bac-487d-9ddd-8a53b51d3b46] Running
	I0906 12:06:57.471981   12253 system_pods.go:89] "etcd-ha-343000-m02" [de81e1f4-dfe8-442b-9d64-220519d30b24] Running
	I0906 12:06:57.471985   12253 system_pods.go:89] "etcd-ha-343000-m03" [d564233b-22e9-4873-8071-ef535ca1bd83] Running
	I0906 12:06:57.471989   12253 system_pods.go:89] "kindnet-5rtpx" [773838a1-3123-46ce-8dbb-9f916c3f4259] Running
	I0906 12:06:57.471992   12253 system_pods.go:89] "kindnet-9rf4h" [363a73a6-1851-4ec5-a0e5-6270edf75902] Running
	I0906 12:06:57.471994   12253 system_pods.go:89] "kindnet-ksnvp" [4888f026-8cd9-4122-a310-7f03c43c183d] Running
	I0906 12:06:57.471997   12253 system_pods.go:89] "kindnet-tj4jx" [76121e61-0d6a-4534-af9c-cbecff2c2939] Running
	I0906 12:06:57.472000   12253 system_pods.go:89] "kube-apiserver-ha-343000" [7d454013-0af1-4b0a-9677-b8f47cc7315d] Running
	I0906 12:06:57.472003   12253 system_pods.go:89] "kube-apiserver-ha-343000-m02" [1ed64c19-92ff-45a0-8f15-5043d7b1dcf0] Running
	I0906 12:06:57.472006   12253 system_pods.go:89] "kube-apiserver-ha-343000-m03" [5a6e83f6-3ee5-4283-91bb-c52b31c678fe] Running
	I0906 12:06:57.472009   12253 system_pods.go:89] "kube-controller-manager-ha-343000" [8ccf7c24-8b00-4035-90f3-e5681c501612] Running
	I0906 12:06:57.472012   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m02" [641fd911-9960-45a3-8d68-2675361935fd] Running
	I0906 12:06:57.472015   12253 system_pods.go:89] "kube-controller-manager-ha-343000-m03" [4a2c1044-215e-4598-85f0-b1c10b8966a5] Running
	I0906 12:06:57.472017   12253 system_pods.go:89] "kube-proxy-8hww6" [aa46eef9-733c-4f42-8c7c-ad0ed8009b8a] Running
	I0906 12:06:57.472020   12253 system_pods.go:89] "kube-proxy-r285j" [9ded92c7-abd8-4feb-9882-fd560067a7ec] Running
	I0906 12:06:57.472023   12253 system_pods.go:89] "kube-proxy-x6pfk" [00635268-0e0c-4053-916d-59dab9aa4272] Running
	I0906 12:06:57.472026   12253 system_pods.go:89] "kube-proxy-zjx8z" [10ff5bb6-ea0d-4fcd-a7e0-556c81bc6fcb] Running
	I0906 12:06:57.472029   12253 system_pods.go:89] "kube-scheduler-ha-343000" [e6174869-89a6-4e03-9e55-2c70be88786b] Running
	I0906 12:06:57.472031   12253 system_pods.go:89] "kube-scheduler-ha-343000-m02" [f0d8a780-a737-4b72-97cb-b9063f3e67f4] Running
	I0906 12:06:57.472034   12253 system_pods.go:89] "kube-scheduler-ha-343000-m03" [31d10255-ba89-4c31-b273-5602d5085146] Running
	I0906 12:06:57.472037   12253 system_pods.go:89] "kube-vip-ha-343000" [0a4a90d9-ad02-4ff9-8bce-19755faee6e9] Running
	I0906 12:06:57.472040   12253 system_pods.go:89] "kube-vip-ha-343000-m02" [a0e27399-b48e-4dcd-aa70-248480b78d32] Running
	I0906 12:06:57.472043   12253 system_pods.go:89] "kube-vip-ha-343000-m03" [87ab84a4-cd00-4339-84da-9bbd565e2c4d] Running
	I0906 12:06:57.472047   12253 system_pods.go:89] "storage-provisioner" [9815f44c-20e3-4243-8eb4-60cd42a850ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:06:57.472052   12253 system_pods.go:126] duration metric: took 207.94336ms to wait for k8s-apps to be running ...
	I0906 12:06:57.472059   12253 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:06:57.472107   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:06:57.483773   12253 system_svc.go:56] duration metric: took 11.709185ms WaitForService to wait for kubelet
	I0906 12:06:57.483792   12253 kubeadm.go:582] duration metric: took 27.027343725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:06:57.483805   12253 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:06:57.662348   12253 request.go:632] Waited for 178.494779ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662425   12253 round_trippers.go:463] GET https://192.169.0.24:8443/api/v1/nodes
	I0906 12:06:57.662436   12253 round_trippers.go:469] Request Headers:
	I0906 12:06:57.662448   12253 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:06:57.662457   12253 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:06:57.665964   12253 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:06:57.666853   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666864   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666872   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666875   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666879   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666882   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666885   12253 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:06:57.666888   12253 node_conditions.go:123] node cpu capacity is 2
	I0906 12:06:57.666892   12253 node_conditions.go:105] duration metric: took 183.082589ms to run NodePressure ...
	I0906 12:06:57.666899   12253 start.go:241] waiting for startup goroutines ...
	I0906 12:06:57.666913   12253 start.go:255] writing updated cluster config ...
	I0906 12:06:57.689595   12253 out.go:201] 
	I0906 12:06:57.710968   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:06:57.711085   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.733311   12253 out.go:177] * Starting "ha-343000-m04" worker node in "ha-343000" cluster
	I0906 12:06:57.776497   12253 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:06:57.776531   12253 cache.go:56] Caching tarball of preloaded images
	I0906 12:06:57.776758   12253 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:06:57.776776   12253 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:06:57.776887   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.777953   12253 start.go:360] acquireMachinesLock for ha-343000-m04: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:06:57.778066   12253 start.go:364] duration metric: took 90.409µs to acquireMachinesLock for "ha-343000-m04"
	I0906 12:06:57.778091   12253 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:06:57.778100   12253 fix.go:54] fixHost starting: m04
	I0906 12:06:57.778535   12253 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:06:57.778560   12253 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:06:57.788011   12253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56397
	I0906 12:06:57.788364   12253 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:06:57.788747   12253 main.go:141] libmachine: Using API Version  1
	I0906 12:06:57.788763   12253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:06:57.789004   12253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:06:57.789119   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.789216   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 12:06:57.789290   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.789388   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 12:06:57.790320   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid 10558 missing from process table
	I0906 12:06:57.790346   12253 fix.go:112] recreateIfNeeded on ha-343000-m04: state=Stopped err=<nil>
	I0906 12:06:57.790354   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	W0906 12:06:57.790423   12253 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:06:57.811236   12253 out.go:177] * Restarting existing hyperkit VM for "ha-343000-m04" ...
	I0906 12:06:57.853317   12253 main.go:141] libmachine: (ha-343000-m04) Calling .Start
	I0906 12:06:57.853695   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.853752   12253 main.go:141] libmachine: (ha-343000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid
	I0906 12:06:57.853833   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Using UUID 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5
	I0906 12:06:57.879995   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Generated MAC 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.880018   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000
	I0906 12:06:57.880162   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880191   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001201e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:06:57.880277   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"}
	I0906 12:06:57.880319   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0c45b4e9-6162-4e5f-8b6f-82e9c2aa82c5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/ha-343000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-343000"
	I0906 12:06:57.880330   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:06:57.881745   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 DEBUG: hyperkit: Pid is 12301
	I0906 12:06:57.882213   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Attempt 0
	I0906 12:06:57.882229   12253 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:06:57.882285   12253 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 12301
	I0906 12:06:57.884227   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Searching for 6a:d8:ba:fa:e9:e7 in /var/db/dhcpd_leases ...
	I0906 12:06:57.884329   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found 26 entries in /var/db/dhcpd_leases!
	I0906 12:06:57.884344   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:3e:84:3d:bc:9c:31 ID:1,3e:84:3d:bc:9c:31 Lease:0x66dca42d}
	I0906 12:06:57.884361   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:a2:d5:dd:3d:e9:56 ID:1,a2:d5:dd:3d:e9:56 Lease:0x66dca3fa}
	I0906 12:06:57.884375   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:e:ef:97:91:be:81 ID:1,e:ef:97:91:be:81 Lease:0x66dca3e7}
	I0906 12:06:57.884400   12253 main.go:141] libmachine: (ha-343000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:6a:d8:ba:fa:e9:e7 ID:1,6a:d8:ba:fa:e9:e7 Lease:0x66db5123}
	I0906 12:06:57.884406   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetConfigRaw
	I0906 12:06:57.884413   12253 main.go:141] libmachine: (ha-343000-m04) DBG | Found match: 6a:d8:ba:fa:e9:e7
	I0906 12:06:57.884464   12253 main.go:141] libmachine: (ha-343000-m04) DBG | IP: 192.169.0.27
	I0906 12:06:57.885084   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:06:57.885308   12253 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/ha-343000/config.json ...
	I0906 12:06:57.885947   12253 machine.go:93] provisionDockerMachine start ...
	I0906 12:06:57.885958   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:06:57.886118   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:06:57.886263   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:06:57.886401   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886518   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:06:57.886625   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:06:57.886755   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:06:57.886913   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:06:57.886920   12253 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:06:57.890225   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:06:57.898506   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:06:57.900023   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:57.900046   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:57.900059   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:57.900081   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.292623   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:06:58.292638   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:06:58.407402   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:06:58.407425   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:06:58.407438   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:06:58.407462   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:06:58.408295   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:06:58.408305   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:06:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:07:04.116677   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:07:04.116760   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:07:04.116771   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:07:04.140349   12253 main.go:141] libmachine: (ha-343000-m04) DBG | 2024/09/06 12:07:04 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:07:32.960229   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:07:32.960245   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960393   12253 buildroot.go:166] provisioning hostname "ha-343000-m04"
	I0906 12:07:32.960404   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:32.960498   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:32.960578   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:32.960651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960733   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:32.960822   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:32.960938   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:32.961089   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:32.961097   12253 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-343000-m04 && echo "ha-343000-m04" | sudo tee /etc/hostname
	I0906 12:07:33.029657   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-343000-m04
	
	I0906 12:07:33.029671   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.029803   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.029895   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.029994   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.030077   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.030212   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.030354   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.030365   12253 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-343000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-343000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-343000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:07:33.094966   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:07:33.094982   12253 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:07:33.094992   12253 buildroot.go:174] setting up certificates
	I0906 12:07:33.094999   12253 provision.go:84] configureAuth start
	I0906 12:07:33.095005   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetMachineName
	I0906 12:07:33.095148   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:33.095261   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.095345   12253 provision.go:143] copyHostCerts
	I0906 12:07:33.095383   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095445   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:07:33.095451   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:07:33.095595   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:07:33.095788   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095828   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:07:33.095833   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:07:33.095913   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:07:33.096069   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096123   12253 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:07:33.096133   12253 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:07:33.096216   12253 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:07:33.096362   12253 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.ha-343000-m04 san=[127.0.0.1 192.169.0.27 ha-343000-m04 localhost minikube]
	I0906 12:07:33.148486   12253 provision.go:177] copyRemoteCerts
	I0906 12:07:33.148536   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:07:33.148551   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.148688   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.148785   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.148886   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.148968   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:33.184847   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:07:33.184925   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:07:33.204793   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:07:33.204868   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:07:33.225189   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:07:33.225262   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 12:07:33.245047   12253 provision.go:87] duration metric: took 150.030083ms to configureAuth
	I0906 12:07:33.245064   12253 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:07:33.245233   12253 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:07:33.245264   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:33.245394   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.245474   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.245563   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245656   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.245735   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.245857   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.245998   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.246006   12253 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:07:33.305766   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:07:33.305779   12253 buildroot.go:70] root file system type: tmpfs
	I0906 12:07:33.305852   12253 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:07:33.305865   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.305998   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.306097   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306198   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.306282   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.306410   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.306555   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.306603   12253 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.24"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25"
	Environment="NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:07:33.377062   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.24
	Environment=NO_PROXY=192.169.0.24,192.169.0.25
	Environment=NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:07:33.377081   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:33.377218   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:33.377309   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377395   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:33.377470   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:33.377595   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:33.377731   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:33.377745   12253 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:07:34.969419   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:07:34.969435   12253 machine.go:96] duration metric: took 37.07976383s to provisionDockerMachine
	I0906 12:07:34.969443   12253 start.go:293] postStartSetup for "ha-343000-m04" (driver="hyperkit")
	I0906 12:07:34.969451   12253 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:07:34.969464   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:34.969653   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:07:34.969667   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:34.969755   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:34.969839   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:34.969938   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:34.970026   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.005883   12253 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:07:35.009124   12253 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:07:35.009135   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:07:35.009234   12253 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:07:35.009411   12253 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:07:35.009418   12253 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:07:35.009642   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:07:35.017147   12253 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:07:35.037468   12253 start.go:296] duration metric: took 68.014068ms for postStartSetup
	I0906 12:07:35.037488   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.037659   12253 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 12:07:35.037673   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.037762   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.037851   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.037939   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.038032   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.073675   12253 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0906 12:07:35.073738   12253 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0906 12:07:35.107246   12253 fix.go:56] duration metric: took 37.325422655s for fixHost
	I0906 12:07:35.107273   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.107423   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.107527   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107605   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.107700   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.107824   12253 main.go:141] libmachine: Using SSH client type: native
	I0906 12:07:35.107967   12253 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xcec0ea0] 0xcec3c00 <nil>  [] 0s} 192.169.0.27 22 <nil> <nil>}
	I0906 12:07:35.107979   12253 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:07:35.169429   12253 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649655.267789382
	
	I0906 12:07:35.169443   12253 fix.go:216] guest clock: 1725649655.267789382
	I0906 12:07:35.169449   12253 fix.go:229] Guest: 2024-09-06 12:07:35.267789382 -0700 PDT Remote: 2024-09-06 12:07:35.107262 -0700 PDT m=+153.317111189 (delta=160.527382ms)
	I0906 12:07:35.169466   12253 fix.go:200] guest clock delta is within tolerance: 160.527382ms
	I0906 12:07:35.169472   12253 start.go:83] releasing machines lock for "ha-343000-m04", held for 37.387671405s
	I0906 12:07:35.169494   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.169634   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 12:07:35.192021   12253 out.go:177] * Found network options:
	I0906 12:07:35.212912   12253 out.go:177]   - NO_PROXY=192.169.0.24,192.169.0.25,192.169.0.26
	W0906 12:07:35.233597   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233618   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.233628   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.233643   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234159   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234366   12253 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 12:07:35.234455   12253 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:07:35.234491   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	W0906 12:07:35.234542   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234565   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 12:07:35.234576   12253 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:07:35.234648   12253 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:07:35.234651   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234665   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 12:07:35.234826   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 12:07:35.234871   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235007   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235056   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 12:07:35.235182   12253 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 12:07:35.235206   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 12:07:35.235315   12253 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	W0906 12:07:35.268496   12253 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:07:35.268557   12253 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:07:35.318514   12253 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:07:35.318528   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.318592   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.333874   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:07:35.343295   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:07:35.352492   12253 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.352552   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:07:35.361630   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.370668   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:07:35.379741   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:07:35.389143   12253 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:07:35.398542   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:07:35.407763   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:07:35.416819   12253 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:07:35.426383   12253 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:07:35.434689   12253 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:07:35.442821   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:35.546285   12253 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:07:35.565383   12253 start.go:495] detecting cgroup driver to use...
	I0906 12:07:35.565458   12253 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:07:35.587708   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.599182   12253 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:07:35.618394   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:07:35.629619   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.640716   12253 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:07:35.663169   12253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:07:35.673665   12253 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:07:35.688883   12253 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:07:35.691747   12253 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:07:35.698972   12253 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:07:35.712809   12253 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:07:35.816741   12253 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:07:35.926943   12253 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:07:35.926972   12253 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:07:35.942083   12253 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:07:36.036699   12253 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:08:37.056745   12253 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.01976389s)
	I0906 12:08:37.056810   12253 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:08:37.092348   12253 out.go:201] 
	W0906 12:08:37.113034   12253 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:07:33 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388087675Z" level=info msg="Starting up"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.388874857Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:07:33 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:33.389448447Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.406541023Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421511237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421602459Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421668995Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421705837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421880023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.421931200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422075608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422118185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422150327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422179563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422320644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.422541368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424094220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424143575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424295349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424338381Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424460558Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.424511586Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425636722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425688205Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425727379Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425760048Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425791193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.425860087Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426020444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426094135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426129732Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426167338Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426204356Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426237806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426268346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426298666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426328562Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426358230Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426389211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426418321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426456445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426487889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426516746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426546507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426578999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426618589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426715802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426750125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426780114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426818663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426851076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426879866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426909029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426949139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.426988055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427021053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427049769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427133633Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427177682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427207151Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427236043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427298115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427372740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427431600Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427611432Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427700568Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427760941Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:07:33 ha-343000-m04 dockerd[513]: time="2024-09-06T19:07:33.427803687Z" level=info msg="containerd successfully booted in 0.022207s"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.407865115Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.420336385Z" level=info msg="Loading containers: start."
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.515687290Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:07:34 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:34.987987334Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.032534306Z" level=info msg="Loading containers: done."
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.046984897Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.047174717Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066396312Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:07:35 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:35.066609197Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:07:35 ha-343000-m04 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.147371084Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:07:36 ha-343000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.149138373Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.151983630Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152081675Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:07:36 ha-343000-m04 dockerd[506]: time="2024-09-06T19:07:36.152156440Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:07:37 ha-343000-m04 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:07:37 ha-343000-m04 dockerd[1111]: time="2024-09-06T19:07:37.182746438Z" level=info msg="Starting up"
	Sep 06 19:08:37 ha-343000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:08:37 ha-343000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:08:37.113090   12253 out.go:270] * 
	W0906 12:08:37.114019   12253 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:08:37.156019   12253 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203311461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.203639509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b7ad89fb08b292cfac509e0c383de126da238700a4e5bad8ad55590054381dba/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e01343203b7a509a71640de600f467038bad7b3d1d628993d32a37ee491ef5d1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 19:06:12 ha-343000 cri-dockerd[1402]: time="2024-09-06T19:06:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f2f69bda625f237b44e2bc9af0e9cfd8b05e944b06149fba0d64a3e513338ba1/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607046115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607111680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607122664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.607194485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.645965722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646293720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.646498986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.648910956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664089064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664361369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664585443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:12 ha-343000 dockerd[1155]: time="2024-09-06T19:06:12.664903965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:42 ha-343000 dockerd[1148]: time="2024-09-06T19:06:42.976990703Z" level=info msg="ignoring event" container=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977534371Z" level=info msg="shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977730802Z" level=warning msg="cleaning up after shim disconnected" id=22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af namespace=moby
	Sep 06 19:06:42 ha-343000 dockerd[1155]: time="2024-09-06T19:06:42.977773534Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339610101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339689283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.339702665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:06:57 ha-343000 dockerd[1155]: time="2024-09-06T19:06:57.340050558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c1a60be55b6a1       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   f2f69bda625f2       storage-provisioner
	0e02b4bf2dbaa       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   e01343203b7a5       busybox-7dff88458-x6w7h
	22c131171f901       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   f2f69bda625f2       storage-provisioner
	803c4f073a4fa       ad83b2ca7b09e                                                                                         3 minutes ago       Running             kube-proxy                1                   b7ad89fb08b29       kube-proxy-x6pfk
	554acd0f20e32       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   a2638e4522073       coredns-6f6b679f8f-q4rhs
	c86abdd0a1a3a       12968670680f4                                                                                         3 minutes ago       Running             kindnet-cni               1                   b2c6d9f178680       kindnet-tj4jx
	d15c1bf38706e       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   9e798ad091c8d       coredns-6f6b679f8f-99jtt
	890baa8f92fc8       045733566833c                                                                                         3 minutes ago       Running             kube-controller-manager   6                   26308c7f15e49       kube-controller-manager-ha-343000
	9ca63a507d338       604f5db92eaa8                                                                                         4 minutes ago       Running             kube-apiserver            6                   70de0991ef26f       kube-apiserver-ha-343000
	5f2ecf46dbad7       38af8ddebf499                                                                                         4 minutes ago       Running             kube-vip                  1                   1804cca78c5d0       kube-vip-ha-343000
	4d2f47c39f165       1766f54c897f0                                                                                         4 minutes ago       Running             kube-scheduler            2                   df0b4d2f0d771       kube-scheduler-ha-343000
	592c214e97d5c       604f5db92eaa8                                                                                         4 minutes ago       Exited              kube-apiserver            5                   70de0991ef26f       kube-apiserver-ha-343000
	8bdc400b3db6d       2e96e5913fc06                                                                                         4 minutes ago       Running             etcd                      2                   83808e05f091c       etcd-ha-343000
	5cc4eed8c219e       045733566833c                                                                                         4 minutes ago       Exited              kube-controller-manager   5                   26308c7f15e49       kube-controller-manager-ha-343000
	4066393d7e7ae       38af8ddebf499                                                                                         9 minutes ago       Exited              kube-vip                  0                   6a05e2d25f30e       kube-vip-ha-343000
	9b99b2f8d6eda       1766f54c897f0                                                                                         9 minutes ago       Exited              kube-scheduler            1                   920b387c38cf9       kube-scheduler-ha-343000
	11af4dafae646       2e96e5913fc06                                                                                         9 minutes ago       Exited              etcd                      1                   c94f15fec6f2c       etcd-ha-343000
	126eb18521cb6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Exited              busybox                   0                   2dc504f501783       busybox-7dff88458-x6w7h
	34d5a9fcc1387       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   80fa6178f69f4       coredns-6f6b679f8f-99jtt
	931a9cafdfafa       cbb01a7bd410d                                                                                         14 minutes ago      Exited              coredns                   0                   7b9ebf456874a       coredns-6f6b679f8f-q4rhs
	9e6763d81a899       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              14 minutes ago      Exited              kindnet-cni               0                   c552ca6da226c       kindnet-tj4jx
	9ab0b6ac90ac6       ad83b2ca7b09e                                                                                         14 minutes ago      Exited              kube-proxy                0                   3b385975c32bf       kube-proxy-x6pfk
	
	
	==> coredns [34d5a9fcc138] <==
	[INFO] 10.244.2.2:58789 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120754s
	[INFO] 10.244.2.2:43811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080086s
	[INFO] 10.244.1.2:37705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094111s
	[INFO] 10.244.1.2:51020 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101921s
	[INFO] 10.244.1.2:35595 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128009s
	[INFO] 10.244.1.2:37466 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081653s
	[INFO] 10.244.1.2:44316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092754s
	[INFO] 10.244.0.4:46178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007817s
	[INFO] 10.244.0.4:45010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093888s
	[INFO] 10.244.0.4:53754 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054541s
	[INFO] 10.244.0.4:50908 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074295s
	[INFO] 10.244.0.4:40350 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117915s
	[INFO] 10.244.2.2:46721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198726s
	[INFO] 10.244.2.2:49403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105805s
	[INFO] 10.244.2.2:38196 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015881s
	[INFO] 10.244.1.2:40271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009061s
	[INFO] 10.244.1.2:58192 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123353s
	[INFO] 10.244.1.2:58287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102796s
	[INFO] 10.244.2.2:60545 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120865s
	[INFO] 10.244.1.2:58192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108489s
	[INFO] 10.244.0.4:46772 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135939s
	[INFO] 10.244.0.4:57982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000032936s
	[INFO] 10.244.0.4:40948 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000121558s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [554acd0f20e3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37373 - 8840 "HINFO IN 6495643642992279060.3361092094518909540. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011184519s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[237904971]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.794) (total time: 30004ms):
	Trace[237904971]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (19:06:42.797)
	Trace[237904971]: [30.004464183s] [30.004464183s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[660143257]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.798) (total time: 30000ms):
	Trace[660143257]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:06:42.799)
	Trace[660143257]: [30.000893558s] [30.000893558s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[380072670]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30007ms):
	Trace[380072670]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.797)
	Trace[380072670]: [30.007427279s] [30.007427279s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [931a9cafdfaf] <==
	[INFO] 10.244.2.2:47871 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092349s
	[INFO] 10.244.2.2:36751 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154655s
	[INFO] 10.244.2.2:35765 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113227s
	[INFO] 10.244.2.2:34953 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000189846s
	[INFO] 10.244.1.2:37377 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000779385s
	[INFO] 10.244.1.2:36374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000523293s
	[INFO] 10.244.1.2:47415 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043613s
	[INFO] 10.244.0.4:56645 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006213s
	[INFO] 10.244.0.4:51009 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096214s
	[INFO] 10.244.0.4:41355 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183012s
	[INFO] 10.244.2.2:50655 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138209s
	[INFO] 10.244.1.2:38832 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167262s
	[INFO] 10.244.0.4:46148 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117645s
	[INFO] 10.244.0.4:43019 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107376s
	[INFO] 10.244.0.4:57161 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000028792s
	[INFO] 10.244.0.4:42860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034502s
	[INFO] 10.244.2.2:36830 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089883s
	[INFO] 10.244.2.2:47924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141909s
	[INFO] 10.244.2.2:47506 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000097095s
	[INFO] 10.244.1.2:49209 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011143s
	[INFO] 10.244.1.2:36137 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100085s
	[INFO] 10.244.1.2:47199 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000096821s
	[INFO] 10.244.0.4:43720 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040385s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d15c1bf38706] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54176 - 21158 "HINFO IN 3457232632200313932.3905864345721771129. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010437248s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1587501409]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.793) (total time: 30005ms):
	Trace[1587501409]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (19:06:42.798)
	Trace[1587501409]: [30.005577706s] [30.005577706s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[680749614]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.792) (total time: 30005ms):
	Trace[680749614]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (19:06:42.798)
	Trace[680749614]: [30.005762488s] [30.005762488s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1474873071]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:06:12.799) (total time: 30001ms):
	Trace[1474873071]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:06:42.800)
	Trace[1474873071]: [30.001544995s] [30.001544995s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-343000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T11_55_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:55:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:57 +0000   Fri, 06 Sep 2024 18:55:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.24
	  Hostname:    ha-343000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 6523db55e885482e8ac62c2082b7e4e8
	  System UUID:                36fe47a6-0000-0000-a226-e026237c9096
	  Boot ID:                    a6ec27d4-119e-4645-b472-4cbf4d3b3af4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6w7h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-99jtt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-q4rhs             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-343000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-tj4jx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-343000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-343000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-x6pfk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-343000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-343000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     15m                    kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m                    kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  15m                    kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           15m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  NodeReady                14m                    kubelet          Node ha-343000 status is now: NodeReady
	  Normal  RegisteredNode           13m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m51s)  kubelet          Node ha-343000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m51s)  kubelet          Node ha-343000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m51s (x7 over 4m51s)  kubelet          Node ha-343000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	  Normal  RegisteredNode           29s                    node-controller  Node ha-343000 event: Registered Node ha-343000 in Controller
	
	
	Name:               ha-343000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_56_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:05:58 +0000   Fri, 06 Sep 2024 18:56:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.25
	  Hostname:    ha-343000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 01c58e04d4304f6f9c11ce89f0bbf71d
	  System UUID:                2c7446f3-0000-0000-9664-55c72aec5dea
	  Boot ID:                    d9c8abd7-e4ec-46d0-892f-bd1bfa22eaef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jk74s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-343000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-5rtpx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-343000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-343000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-zjx8z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-343000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-343000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m1s                   kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Warning  Rebooted                 10m                    kubelet          Node ha-343000-m02 has been rebooted, boot id: 9a70d273-2199-426f-b35f-a9b4075cc0d7
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m31s (x8 over 4m31s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m31s (x8 over 4m31s)  kubelet          Node ha-343000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m31s (x7 over 4m31s)  kubelet          Node ha-343000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           3m58s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           3m34s                  node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	  Normal   RegisteredNode           29s                    node-controller  Node ha-343000-m02 event: Registered Node ha-343000-m02 in Controller
	
	
	Name:               ha-343000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_57_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:57:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:06:30 +0000   Fri, 06 Sep 2024 18:57:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.26
	  Hostname:    ha-343000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 da881992752a4b679c6a5b2a9f0cdfbb
	  System UUID:                5abf4f35-0000-0000-b6fc-c88bfc629e81
	  Boot ID:                    1683487f-47c5-465d-9b2b-74dea29e28d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2kj2b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-343000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-ksnvp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-343000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-343000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-r285j                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-343000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-343000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3m37s              kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           4m19s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           3m58s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   Starting                 3m41s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m41s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m41s              kubelet          Node ha-343000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m41s              kubelet          Node ha-343000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m41s              kubelet          Node ha-343000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m41s              kubelet          Node ha-343000-m03 has been rebooted, boot id: 1683487f-47c5-465d-9b2b-74dea29e28d4
	  Normal   RegisteredNode           3m34s              node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	  Normal   RegisteredNode           29s                node-controller  Node ha-343000-m03 event: Registered Node ha-343000-m03 in Controller
	
	
	Name:               ha-343000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T11_58_13_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:58:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:59:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:58:43 +0000   Fri, 06 Sep 2024 19:06:32 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.27
	  Hostname:    ha-343000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 25099ec69db34e82bcd2f07d22b80010
	  System UUID:                0c454e5f-0000-0000-8b6f-82e9c2aa82c5
	  Boot ID:                    b76c6143-1924-46d7-b754-0208a6d7ff29
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9rf4h       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-8hww6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-343000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-343000-m04 status is now: NodeHasNoDiskPressure
	  Normal  CIDRAssignmentFailed     11m                cidrAllocator    Node ha-343000-m04 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           11m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeReady                11m                kubelet          Node ha-343000-m04 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           4m19s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           3m58s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  NodeNotReady             3m39s              node-controller  Node ha-343000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           3m34s              node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-343000-m04 event: Registered Node ha-343000-m04 in Controller
	
	
	Name:               ha-343000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-343000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-343000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T12_09_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:09:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-343000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:10:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:10:05 +0000   Fri, 06 Sep 2024 19:09:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.28
	  Hostname:    ha-343000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 c35084f0124f46c6bde88f6228c41b21
	  System UUID:                b7ce4581-0000-0000-b100-3eaa6ce1c90b
	  Boot ID:                    1be229f7-63d3-4854-87de-793aff331e2a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-343000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         35s
	  kube-system                 kindnet-f4mts                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      37s
	  kube-system                 kube-apiserver-ha-343000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-ha-343000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-7xrbs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-ha-343000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-vip-ha-343000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 33s                kube-proxy       
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  37s (x8 over 38s)  kubelet          Node ha-343000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 38s)  kubelet          Node ha-343000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 38s)  kubelet          Node ha-343000-m05 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	  Normal  RegisteredNode           34s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	  Normal  RegisteredNode           33s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-343000-m05 event: Registered Node ha-343000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.036474] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008025] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.716498] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006721] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.833567] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.343017] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.247177] systemd-fstab-generator[471]: Ignoring "noauto" option for root device
	[  +0.103204] systemd-fstab-generator[483]: Ignoring "noauto" option for root device
	[  +1.994098] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +0.255819] systemd-fstab-generator[1114]: Ignoring "noauto" option for root device
	[  +0.098656] systemd-fstab-generator[1126]: Ignoring "noauto" option for root device
	[  +0.058515] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.064719] systemd-fstab-generator[1140]: Ignoring "noauto" option for root device
	[  +2.463494] systemd-fstab-generator[1355]: Ignoring "noauto" option for root device
	[  +0.126800] systemd-fstab-generator[1367]: Ignoring "noauto" option for root device
	[  +0.101663] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.133711] systemd-fstab-generator[1394]: Ignoring "noauto" option for root device
	[  +0.457617] systemd-fstab-generator[1554]: Ignoring "noauto" option for root device
	[  +6.844240] kauditd_printk_skb: 190 callbacks suppressed
	[ +21.300680] kauditd_printk_skb: 40 callbacks suppressed
	[Sep 6 19:06] kauditd_printk_skb: 83 callbacks suppressed
	
	
	==> etcd [11af4dafae64] <==
	{"level":"warn","ts":"2024-09-06T19:04:56.004501Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:04:56.510489Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3454984155381402166,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:04:56.955363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:56.955429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982261Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-06T19:04:56.982469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.000937137s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-06T19:04:56.982656Z","caller":"traceutil/trace.go:171","msg":"trace[219101750] range","detail":"{range_begin:; range_end:; }","duration":"7.001140659s","start":"2024-09-06T19:04:49.981500Z","end":"2024-09-06T19:04:56.982641Z","steps":["trace[219101750] 'agreement among raft nodes before linearized reading'  (duration: 7.000934405s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:04:56.982940Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:04:58.256456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:58.256589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839480Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.839529Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"637242e03e6dd2d1","rtt":"0s","error":"dial tcp 192.169.0.25:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842271Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-06T19:04:58.842292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: i/o timeout"}
	{"level":"info","ts":"2024-09-06T19:04:59.555087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 received MsgPreVoteResp from 6dbe4340aa302ff2 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 637242e03e6dd2d1 at term 2"}
	{"level":"info","ts":"2024-09-06T19:04:59.555139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 [logterm: 2, index: 1996] sent MsgPreVote request to 6a6e0aa498652645 at term 2"}
	
	
	==> etcd [8bdc400b3db6] <==
	{"level":"info","ts":"2024-09-06T19:06:32.447583Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"info","ts":"2024-09-06T19:06:32.448798Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"6a6e0aa498652645","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:06:32.448838Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"6a6e0aa498652645"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482173Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:06:32.482231Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6a6e0aa498652645","rtt":"0s","error":"dial tcp 192.169.0.26:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-06T19:09:34.382036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 switched to configuration voters=(7165863487987372753 7669078917506213445 7907831940721422322) learners=(17537987181276551891)"}
	{"level":"info","ts":"2024-09-06T19:09:34.382516Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e6f1c753fc4a3cb","local-member-id":"6dbe4340aa302ff2","added-peer-id":"f363726bcf57dad3","added-peer-peer-urls":["https://192.169.0.28:2380"]}
	{"level":"info","ts":"2024-09-06T19:09:34.382681Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.382753Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.383131Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.383327Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.384392Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.384608Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3","remote-peer-urls":["https://192.169.0.28:2380"]}
	{"level":"info","ts":"2024-09-06T19:09:34.384885Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.385070Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"6dbe4340aa302ff2","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:34.385746Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.808975Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.809178Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.817626Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.858169Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"f363726bcf57dad3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-06T19:09:35.858364Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:35.870254Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6dbe4340aa302ff2","to":"f363726bcf57dad3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:09:35.870298Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6dbe4340aa302ff2","remote-peer-id":"f363726bcf57dad3"}
	{"level":"info","ts":"2024-09-06T19:09:36.944608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6dbe4340aa302ff2 switched to configuration voters=(7165863487987372753 7669078917506213445 7907831940721422322 17537987181276551891)"}
	{"level":"info","ts":"2024-09-06T19:09:36.944792Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e6f1c753fc4a3cb","local-member-id":"6dbe4340aa302ff2"}
	
	
	==> kernel <==
	 19:10:12 up 5 min,  0 users,  load average: 0.22, 0.24, 0.11
	Linux ha-343000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9e6763d81a89] <==
	I0906 18:59:27.723199       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727295       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:37.727338       1 main.go:299] handling current node
	I0906 18:59:37.727349       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:37.727353       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:37.727428       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:37.727453       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:37.727489       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:37.727513       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:47.728363       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:47.728518       1 main.go:299] handling current node
	I0906 18:59:47.728633       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:47.728739       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:47.728918       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:47.728997       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:47.729121       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:47.729229       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 18:59:57.722632       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 18:59:57.722671       1 main.go:299] handling current node
	I0906 18:59:57.722682       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 18:59:57.722686       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 18:59:57.722937       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 18:59:57.722967       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 18:59:57.723092       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 18:59:57.723199       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c86abdd0a1a3] <==
	I0906 19:09:43.506789       1 main.go:299] handling current node
	I0906 19:09:43.506800       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:09:43.506805       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:09:43.506939       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:09:43.507012       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:09:53.503008       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:09:53.503569       1 main.go:299] handling current node
	I0906 19:09:53.503905       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:09:53.504067       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:09:53.504245       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:09:53.504379       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:09:53.504554       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:09:53.504700       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:09:53.504831       1 main.go:295] Handling node with IPs: map[192.169.0.28:{}]
	I0906 19:09:53.504946       1 main.go:322] Node ha-343000-m05 has CIDR [10.244.4.0/24] 
	I0906 19:10:03.506756       1 main.go:295] Handling node with IPs: map[192.169.0.26:{}]
	I0906 19:10:03.506987       1 main.go:322] Node ha-343000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:10:03.507271       1 main.go:295] Handling node with IPs: map[192.169.0.27:{}]
	I0906 19:10:03.507377       1 main.go:322] Node ha-343000-m04 has CIDR [10.244.3.0/24] 
	I0906 19:10:03.507512       1 main.go:295] Handling node with IPs: map[192.169.0.28:{}]
	I0906 19:10:03.507634       1 main.go:322] Node ha-343000-m05 has CIDR [10.244.4.0/24] 
	I0906 19:10:03.507759       1 main.go:295] Handling node with IPs: map[192.169.0.24:{}]
	I0906 19:10:03.507874       1 main.go:299] handling current node
	I0906 19:10:03.507924       1 main.go:295] Handling node with IPs: map[192.169.0.25:{}]
	I0906 19:10:03.508028       1 main.go:322] Node ha-343000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [592c214e97d5] <==
	I0906 19:05:27.461896       1 options.go:228] external host was not specified, using 192.169.0.24
	I0906 19:05:27.465176       1 server.go:142] Version: v1.31.0
	I0906 19:05:27.465213       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.107777       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:05:28.107810       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:05:28.107883       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:05:28.108002       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:05:28.108375       1 instance.go:232] Using reconciler: lease
	W0906 19:05:48.100071       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:05:48.101622       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0906 19:05:48.109302       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [9ca63a507d33] <==
	I0906 19:06:00.319954       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0906 19:06:00.329227       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0906 19:06:00.389615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:06:00.399153       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:06:00.399318       1 policy_source.go:224] refreshing policies
	I0906 19:06:00.418950       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:06:00.418975       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:06:00.419196       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:06:00.421841       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:06:00.423174       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:06:00.423547       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:06:00.423580       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:06:00.423586       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:06:00.423589       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:06:00.423592       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:06:00.424202       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:06:00.424372       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:06:00.429383       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	W0906 19:06:00.444807       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.25]
	I0906 19:06:00.446706       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:06:00.460452       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0906 19:06:00.463465       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0906 19:06:00.488387       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:06:01.327320       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:06:01.574034       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.24 192.169.0.25]
	
	
	==> kube-controller-manager [5cc4eed8c219] <==
	I0906 19:05:28.174269       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:05:28.573887       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:05:28.573928       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:05:28.585160       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:05:28.585380       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:05:28.585888       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:05:28.586027       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:05:49.113760       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.24:8443/healthz\": dial tcp 192.169.0.24:8443: connect: connection refused"
	
	
	==> kube-controller-manager [890baa8f92fc] <==
	I0906 19:09:34.213167       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-343000-m05" podCIDRs=["10.244.4.0/24"]
	I0906 19:09:34.213624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:34.213692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:34.235668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:34.288789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	E0906 19:09:34.364650       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"d4bfe8d6-d130-47f9-a49c-d1349255746b\", ResourceVersion:\"2070\", Generation:1, CreationTimestamp:time.Date(2024, time.September, 6, 18, 55, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b9cea0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\
", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002961e80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024737d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolum
eSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVo
lumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0024737e8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtua
lDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"registry.k8s.io/kube-proxy:v1.31.0\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001b9cee0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.Res
ourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\
"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0027e7200), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc002a272c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00295d580), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.Host
Alias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002a63450)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a27320)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:3, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfille
d on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0906 19:09:37.402652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.042560       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.052836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.110172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.160559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.229041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.608683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:38.609244       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-343000-m05"
	I0906 19:09:38.701224       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:42.231946       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:09:42.242032       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:42.324133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m04"
	I0906 19:09:44.477813       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:48.122052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:52.422627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:56.045182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:56.072455       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:09:57.266707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	I0906 19:10:05.182527       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-343000-m05"
	
	
	==> kube-proxy [803c4f073a4f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:06:13.148913       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:06:13.172780       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 19:06:13.173030       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:06:13.214090       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:06:13.214133       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:06:13.214154       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:06:13.217530       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:06:13.218331       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:06:13.218361       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:06:13.222797       1 config.go:197] "Starting service config controller"
	I0906 19:06:13.222930       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:06:13.223035       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:06:13.223104       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:06:13.225748       1 config.go:326] "Starting node config controller"
	I0906 19:06:13.225874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:06:13.323124       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:06:13.324280       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:06:13.326187       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9ab0b6ac90ac] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:55:13.194683       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:55:13.204778       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.24"]
	E0906 18:55:13.204815       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:55:13.260675       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:55:13.260697       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:55:13.260715       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:55:13.267079       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:55:13.267303       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:55:13.267312       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:55:13.269494       1 config.go:197] "Starting service config controller"
	I0906 18:55:13.269521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:55:13.269531       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:55:13.269534       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:55:13.269766       1 config.go:326] "Starting node config controller"
	I0906 18:55:13.269792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:55:13.371232       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:55:13.371252       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:55:13.371258       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4d2f47c39f16] <==
	W0906 19:05:58.391628       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.391680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.574460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.574508       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:05:58.613456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:05:58.613730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	I0906 19:06:06.337934       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:09:34.254444       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hbcb4\": pod kindnet-hbcb4 is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-hbcb4" node="ha-343000-m05"
	E0906 19:09:34.254918       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hbcb4\": pod kindnet-hbcb4 is already assigned to node \"ha-343000-m05\"" pod="kube-system/kindnet-hbcb4"
	E0906 19:09:34.255088       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sdxss\": pod kube-proxy-sdxss is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sdxss" node="ha-343000-m05"
	E0906 19:09:34.255340       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sdxss\": pod kube-proxy-sdxss is already assigned to node \"ha-343000-m05\"" pod="kube-system/kube-proxy-sdxss"
	E0906 19:09:34.273658       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wtphc\": pod kube-proxy-wtphc is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wtphc" node="ha-343000-m05"
	E0906 19:09:34.273753       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wtphc\": pod kube-proxy-wtphc is already assigned to node \"ha-343000-m05\"" pod="kube-system/kube-proxy-wtphc"
	E0906 19:09:34.289514       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-f4mts\": pod kindnet-f4mts is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-f4mts" node="ha-343000-m05"
	E0906 19:09:34.289982       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bcac08d6-b75f-4fbb-a399-2c77e9b2e57d(kube-system/kindnet-f4mts) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-f4mts"
	E0906 19:09:34.290020       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-f4mts\": pod kindnet-f4mts is already assigned to node \"ha-343000-m05\"" pod="kube-system/kindnet-f4mts"
	I0906 19:09:34.290350       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-f4mts" node="ha-343000-m05"
	E0906 19:09:34.317789       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sbspk\": pod kindnet-sbspk is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-sbspk" node="ha-343000-m05"
	E0906 19:09:34.317849       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2c2e6dc3-8631-4375-8f25-517f8c32c39b(kube-system/kindnet-sbspk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sbspk"
	E0906 19:09:34.317863       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sbspk\": pod kindnet-sbspk is already assigned to node \"ha-343000-m05\"" pod="kube-system/kindnet-sbspk"
	I0906 19:09:34.317875       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sbspk" node="ha-343000-m05"
	E0906 19:09:34.346836       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7xrbs\": pod kube-proxy-7xrbs is already assigned to node \"ha-343000-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7xrbs" node="ha-343000-m05"
	E0906 19:09:34.346894       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1de2379e-e9ef-4da3-915f-dc0986a6129e(kube-system/kube-proxy-7xrbs) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7xrbs"
	E0906 19:09:34.346911       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7xrbs\": pod kube-proxy-7xrbs is already assigned to node \"ha-343000-m05\"" pod="kube-system/kube-proxy-7xrbs"
	I0906 19:09:34.346924       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7xrbs" node="ha-343000-m05"
	
	
	==> kube-scheduler [9b99b2f8d6ed] <==
	W0906 19:04:31.417232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.417325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:31.755428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:31.755742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:35.986154       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:35.986279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.169.0.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.066579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.066654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.169.0.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.563029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.563228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.169.0.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:40.748870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:40.749078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.169.0.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:45.521553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:45.521675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.169.0.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:47.041120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:47.041443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.169.0.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:52.540182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:52.540432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.169.0.24:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:04:54.069445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.24:8443: connect: connection refused
	E0906 19:04:54.069585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.169.0.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.169.0.24:8443: connect: connection refused" logger="UnhandledError"
	E0906 19:04:59.711524       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	I0906 19:04:59.712006       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0906 19:04:59.712120       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 19:04:59.712142       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	E0906 19:04:59.712922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 06 19:06:20 ha-343000 kubelet[1561]: E0906 19:06:20.331039    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:06:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:06:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:06:20 ha-343000 kubelet[1561]: I0906 19:06:20.393885    1561 scope.go:117] "RemoveContainer" containerID="b3713b7090d8f8af511e66546413a97f331dea488be8efe378a26980838f7cf4"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211095    1561 scope.go:117] "RemoveContainer" containerID="051e748db656a81282f4811bb15ed42555514a115306dfa611e2c0d2af72e345"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: I0906 19:06:43.211309    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:06:43 ha-343000 kubelet[1561]: E0906 19:06:43.211390    1561 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9815f44c-20e3-4243-8eb4-60cd42a850ad)\"" pod="kube-system/storage-provisioner" podUID="9815f44c-20e3-4243-8eb4-60cd42a850ad"
	Sep 06 19:06:57 ha-343000 kubelet[1561]: I0906 19:06:57.289715    1561 scope.go:117] "RemoveContainer" containerID="22c131171f901dae7b3e44faab00b8d8ebe428688e0a07de6a5b806ee5fa76af"
	Sep 06 19:07:20 ha-343000 kubelet[1561]: E0906 19:07:20.331091    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:07:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:07:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:08:20 ha-343000 kubelet[1561]: E0906 19:08:20.333049    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:08:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:08:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:09:20 ha-343000 kubelet[1561]: E0906 19:09:20.331561    1561 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:09:20 ha-343000 kubelet[1561]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:09:20 ha-343000 kubelet[1561]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:09:20 ha-343000 kubelet[1561]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:09:20 ha-343000 kubelet[1561]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-343000 -n ha-343000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-343000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.59s)

TestMountStart/serial/StartWithMountFirst (136.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-075000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0906 12:14:36.524695    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-075000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.813018533s)

-- stdout --
	* [mount-start-1-075000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-075000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-075000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 8a:9c:57:76:1f:95
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-075000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 16:d8:76:5:cb:5f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 16:d8:76:5:cb:5f
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-075000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-075000 -n mount-start-1-075000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-075000 -n mount-start-1-075000: exit status 7 (80.745401ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0906 12:16:12.040611   12725 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 12:16:12.040635   12725 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-075000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.89s)

TestMultiNode/serial/RestartKeepsNodes (205.47s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-459000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-459000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-459000: (18.82339699s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-459000 --wait=true -v=8 --alsologtostderr
E0906 12:21:02.603035    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:22:59.528778    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-459000 --wait=true -v=8 --alsologtostderr: exit status 90 (3m2.95189923s)

-- stdout --
	* [multinode-459000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-459000" primary control-plane node in "multinode-459000" cluster
	* Restarting existing hyperkit VM for "multinode-459000" ...
	* Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-459000-m02" worker node in "multinode-459000" cluster
	* Restarting existing hyperkit VM for "multinode-459000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.33
	
	

-- /stdout --
** stderr ** 
	I0906 12:20:04.345863   13103 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:04.346053   13103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:04.346060   13103 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:04.346064   13103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:04.346235   13103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:20:04.347624   13103 out.go:352] Setting JSON to false
	I0906 12:20:04.372597   13103 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11975,"bootTime":1725638429,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:20:04.372699   13103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:20:04.394472   13103 out.go:177] * [multinode-459000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:20:04.436211   13103 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:20:04.436276   13103 notify.go:220] Checking for updates...
	I0906 12:20:04.478971   13103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:04.499819   13103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:20:04.521129   13103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:20:04.542343   13103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:20:04.563008   13103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:20:04.584955   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:04.585128   13103 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:20:04.585775   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.585861   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:20:04.595482   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57485
	I0906 12:20:04.595845   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:20:04.596336   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:20:04.596353   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:20:04.596616   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:20:04.596748   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.625251   13103 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:20:04.667227   13103 start.go:297] selected driver: hyperkit
	I0906 12:20:04.667254   13103 start.go:901] validating driver "hyperkit" against &{Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:04.667526   13103 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:20:04.667707   13103 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:20:04.667925   13103 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:20:04.677596   13103 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:20:04.681720   13103 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.681741   13103 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:20:04.684904   13103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:20:04.684944   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:04.684957   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:04.685037   13103 start.go:340] cluster config:
	{Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:04.685143   13103 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:20:04.727037   13103 out.go:177] * Starting "multinode-459000" primary control-plane node in "multinode-459000" cluster
	I0906 12:20:04.748083   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:20:04.748146   13103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:20:04.748175   13103 cache.go:56] Caching tarball of preloaded images
	I0906 12:20:04.748360   13103 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:20:04.748383   13103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:20:04.748522   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:20:04.749240   13103 start.go:360] acquireMachinesLock for multinode-459000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:20:04.749328   13103 start.go:364] duration metric: took 55.823µs to acquireMachinesLock for "multinode-459000"
	I0906 12:20:04.749345   13103 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:20:04.749357   13103 fix.go:54] fixHost starting: 
	I0906 12:20:04.749579   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.749598   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:20:04.758425   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57487
	I0906 12:20:04.758777   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:20:04.759147   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:20:04.759162   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:20:04.759382   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:20:04.759508   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.759613   13103 main.go:141] libmachine: (multinode-459000) Calling .GetState
	I0906 12:20:04.759719   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.759791   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 12754
	I0906 12:20:04.760733   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid 12754 missing from process table
	I0906 12:20:04.760763   13103 fix.go:112] recreateIfNeeded on multinode-459000: state=Stopped err=<nil>
	I0906 12:20:04.760785   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	W0906 12:20:04.760890   13103 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:20:04.802907   13103 out.go:177] * Restarting existing hyperkit VM for "multinode-459000" ...
	I0906 12:20:04.824117   13103 main.go:141] libmachine: (multinode-459000) Calling .Start
	I0906 12:20:04.824350   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.824408   13103 main.go:141] libmachine: (multinode-459000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid
	I0906 12:20:04.826541   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid 12754 missing from process table
	I0906 12:20:04.826557   13103 main.go:141] libmachine: (multinode-459000) DBG | pid 12754 is in state "Stopped"
	I0906 12:20:04.826571   13103 main.go:141] libmachine: (multinode-459000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid...
	I0906 12:20:04.827002   13103 main.go:141] libmachine: (multinode-459000) DBG | Using UUID 01eb6722-41be-4f7c-b53d-2237e8e3c176
	I0906 12:20:04.935555   13103 main.go:141] libmachine: (multinode-459000) DBG | Generated MAC 3a:dc:bb:38:e3:28
	I0906 12:20:04.935584   13103 main.go:141] libmachine: (multinode-459000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000
	I0906 12:20:04.935690   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"01eb6722-41be-4f7c-b53d-2237e8e3c176", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:20:04.935723   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"01eb6722-41be-4f7c-b53d-2237e8e3c176", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:20:04.935758   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "01eb6722-41be-4f7c-b53d-2237e8e3c176", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/multinode-459000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"}
	I0906 12:20:04.935794   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 01eb6722-41be-4f7c-b53d-2237e8e3c176 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/multinode-459000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"
	I0906 12:20:04.935811   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:20:04.937295   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Pid is 13116
	I0906 12:20:04.937708   13103 main.go:141] libmachine: (multinode-459000) DBG | Attempt 0
	I0906 12:20:04.937719   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.937806   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 13116
	I0906 12:20:04.939357   13103 main.go:141] libmachine: (multinode-459000) DBG | Searching for 3a:dc:bb:38:e3:28 in /var/db/dhcpd_leases ...
	I0906 12:20:04.939446   13103 main.go:141] libmachine: (multinode-459000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0906 12:20:04.939476   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:20:04.939495   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca6c9}
	I0906 12:20:04.939523   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca68b}
	I0906 12:20:04.939530   13103 main.go:141] libmachine: (multinode-459000) DBG | Found match: 3a:dc:bb:38:e3:28
	I0906 12:20:04.939550   13103 main.go:141] libmachine: (multinode-459000) DBG | IP: 192.169.0.33
	I0906 12:20:04.939615   13103 main.go:141] libmachine: (multinode-459000) Calling .GetConfigRaw
	I0906 12:20:04.940318   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:04.940491   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:20:04.940980   13103 machine.go:93] provisionDockerMachine start ...
	I0906 12:20:04.940993   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.941161   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:04.941289   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:04.941397   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:04.941519   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:04.941644   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:04.941784   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:04.941989   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:04.941997   13103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:20:04.945527   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:20:04.997276   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:20:04.997987   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:20:04.998001   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:20:04.998009   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:20:04.998017   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:20:05.390023   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:20:05.390038   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:20:05.504740   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:20:05.504761   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:20:05.504773   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:20:05.504793   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:20:05.505682   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:20:05.505706   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:20:11.126600   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:20:11.126629   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:20:11.126642   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:20:11.150792   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:20:40.017036   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:20:40.017050   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.017188   13103 buildroot.go:166] provisioning hostname "multinode-459000"
	I0906 12:20:40.017198   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.017332   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.017423   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.017512   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.017602   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.017716   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.017845   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.017999   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.018007   13103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-459000 && echo "multinode-459000" | sudo tee /etc/hostname
	I0906 12:20:40.096089   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-459000
	
	I0906 12:20:40.096107   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.096242   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.096342   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.096426   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.096502   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.096618   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.096770   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.096781   13103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-459000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-459000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-459000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:20:40.169206   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:20:40.169225   13103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:20:40.169241   13103 buildroot.go:174] setting up certificates
	I0906 12:20:40.169250   13103 provision.go:84] configureAuth start
	I0906 12:20:40.169257   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.169406   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:40.169492   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.169576   13103 provision.go:143] copyHostCerts
	I0906 12:20:40.169605   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:20:40.169676   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:20:40.169683   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:20:40.170064   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:20:40.170273   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:20:40.170315   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:20:40.170320   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:20:40.170402   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:20:40.170550   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:20:40.170592   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:20:40.170597   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:20:40.170676   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:20:40.170820   13103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.multinode-459000 san=[127.0.0.1 192.169.0.33 localhost minikube multinode-459000]
	I0906 12:20:40.232666   13103 provision.go:177] copyRemoteCerts
	I0906 12:20:40.232717   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:20:40.232731   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.232854   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.232974   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.233068   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.233156   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:40.274812   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:20:40.274888   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:20:40.293995   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:20:40.294068   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:20:40.313187   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:20:40.313258   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 12:20:40.332207   13103 provision.go:87] duration metric: took 162.943562ms to configureAuth
	I0906 12:20:40.332219   13103 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:20:40.332387   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:40.332402   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:40.332534   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.332628   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.332709   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.332780   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.332850   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.332965   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.333093   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.333100   13103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:20:40.400358   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:20:40.400369   13103 buildroot.go:70] root file system type: tmpfs
	I0906 12:20:40.400464   13103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:20:40.400477   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.400616   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.400716   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.400806   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.400897   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.401035   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.401181   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.401224   13103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:20:40.478937   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:20:40.478956   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.479091   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.479178   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.479269   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.479347   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.479476   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.479629   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.479640   13103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:20:42.127114   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:20:42.127129   13103 machine.go:96] duration metric: took 37.186310804s to provisionDockerMachine
	I0906 12:20:42.127143   13103 start.go:293] postStartSetup for "multinode-459000" (driver="hyperkit")
	I0906 12:20:42.127150   13103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:20:42.127165   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.127347   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:20:42.127361   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.127444   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.127542   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.127636   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.127724   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.166901   13103 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:20:42.169868   13103 command_runner.go:130] > NAME=Buildroot
	I0906 12:20:42.169887   13103 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 12:20:42.169893   13103 command_runner.go:130] > ID=buildroot
	I0906 12:20:42.169899   13103 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 12:20:42.169908   13103 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 12:20:42.170001   13103 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:20:42.170014   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:20:42.170122   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:20:42.170312   13103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:20:42.170318   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:20:42.170518   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:20:42.178002   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:20:42.197689   13103 start.go:296] duration metric: took 70.537804ms for postStartSetup
	I0906 12:20:42.197709   13103 fix.go:56] duration metric: took 37.448529222s for fixHost
	I0906 12:20:42.197720   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.197863   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.197977   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.198074   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.198146   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.198279   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:42.198417   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:42.198424   13103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:20:42.262511   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650442.394217988
	
	I0906 12:20:42.262522   13103 fix.go:216] guest clock: 1725650442.394217988
	I0906 12:20:42.262528   13103 fix.go:229] Guest: 2024-09-06 12:20:42.394217988 -0700 PDT Remote: 2024-09-06 12:20:42.197712 -0700 PDT m=+37.888180409 (delta=196.505988ms)
	I0906 12:20:42.262551   13103 fix.go:200] guest clock delta is within tolerance: 196.505988ms
	I0906 12:20:42.262555   13103 start.go:83] releasing machines lock for "multinode-459000", held for 37.513393533s
	I0906 12:20:42.262575   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.262704   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:42.262819   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263209   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263322   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263421   13103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:20:42.263463   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.263466   13103 ssh_runner.go:195] Run: cat /version.json
	I0906 12:20:42.263476   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.263583   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.263606   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.263691   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.263709   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.263807   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.263822   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.263897   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.263913   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.349749   13103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 12:20:42.349790   13103 command_runner.go:130] > {"iso_version": "v1.34.0", "kicbase_version": "v0.0.44-1724862063-19530", "minikube_version": "v1.34.0", "commit": "613a681f9f90c87e637792fcb55bc4d32fe5c29c"}
	I0906 12:20:42.349946   13103 ssh_runner.go:195] Run: systemctl --version
	I0906 12:20:42.354330   13103 command_runner.go:130] > systemd 252 (252)
	I0906 12:20:42.354353   13103 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0906 12:20:42.354539   13103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:20:42.358516   13103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 12:20:42.358541   13103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:20:42.358584   13103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:20:42.371660   13103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0906 12:20:42.371693   13103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:20:42.371706   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:20:42.371808   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:20:42.386518   13103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0906 12:20:42.386805   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:20:42.395515   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:20:42.404507   13103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:20:42.404553   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:20:42.413199   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:20:42.422017   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:20:42.430768   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:20:42.439534   13103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:20:42.448644   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:20:42.457341   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:20:42.465857   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
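	The run of sed commands above rewrites /etc/containerd/config.toml in place on the guest. As a side-effect-free sketch, the SystemdCgroup substitution can be exercised against a scratch copy; the config fragment below is illustrative, not the VM's real file:

```shell
# Illustrative fragment of a containerd config; the real file on the VM differs.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution as the logged command, without sudo or -i so nothing is modified:
sed -E 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' "$cfg"
```

	The capture group preserves the original indentation, which is why the logged commands can safely target nested TOML keys.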
	I0906 12:20:42.474621   13103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:20:42.482317   13103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 12:20:42.482490   13103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:20:42.490267   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:42.589095   13103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:20:42.608272   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:20:42.608350   13103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:20:42.622568   13103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0906 12:20:42.622694   13103 command_runner.go:130] > [Unit]
	I0906 12:20:42.622704   13103 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 12:20:42.622712   13103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 12:20:42.622718   13103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0906 12:20:42.622723   13103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0906 12:20:42.622727   13103 command_runner.go:130] > StartLimitBurst=3
	I0906 12:20:42.622731   13103 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 12:20:42.622734   13103 command_runner.go:130] > [Service]
	I0906 12:20:42.622737   13103 command_runner.go:130] > Type=notify
	I0906 12:20:42.622740   13103 command_runner.go:130] > Restart=on-failure
	I0906 12:20:42.622747   13103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 12:20:42.622754   13103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 12:20:42.622761   13103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 12:20:42.622766   13103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 12:20:42.622771   13103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 12:20:42.622777   13103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 12:20:42.622784   13103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 12:20:42.622791   13103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 12:20:42.622797   13103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 12:20:42.622806   13103 command_runner.go:130] > ExecStart=
	I0906 12:20:42.622822   13103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0906 12:20:42.622829   13103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 12:20:42.622836   13103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 12:20:42.622842   13103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 12:20:42.622845   13103 command_runner.go:130] > LimitNOFILE=infinity
	I0906 12:20:42.622850   13103 command_runner.go:130] > LimitNPROC=infinity
	I0906 12:20:42.622853   13103 command_runner.go:130] > LimitCORE=infinity
	I0906 12:20:42.622858   13103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 12:20:42.622862   13103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 12:20:42.622866   13103 command_runner.go:130] > TasksMax=infinity
	I0906 12:20:42.622882   13103 command_runner.go:130] > TimeoutStartSec=0
	I0906 12:20:42.622891   13103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 12:20:42.622895   13103 command_runner.go:130] > Delegate=yes
	I0906 12:20:42.622900   13103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 12:20:42.622904   13103 command_runner.go:130] > KillMode=process
	I0906 12:20:42.622908   13103 command_runner.go:130] > [Install]
	I0906 12:20:42.622922   13103 command_runner.go:130] > WantedBy=multi-user.target
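	The comments embedded in the unit file above describe the standard systemd override pattern: a bare `ExecStart=` first clears the command inherited from the base unit, since otherwise systemd rejects the service with the "more than one ExecStart= setting" error quoted in the comments. A minimal sketch of such a drop-in, written to a temp directory rather than /etc, with a shortened, hypothetical dockerd command line:

```shell
# Write a hypothetical drop-in to a temp dir; a real one lives under
# /etc/systemd/system/docker.service.d/ and takes effect after daemon-reload.
d=$(mktemp -d)
cat > "$d/10-override.conf" <<'EOF'
[Service]
# An empty ExecStart= clears the start command inherited from the base unit...
ExecStart=
# ...so the next one becomes the single start command, not an invalid sequence.
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' "$d/10-override.conf"   # 2: the bare clear plus the real command
```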
	I0906 12:20:42.623046   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:20:42.635107   13103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:20:42.650265   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:20:42.660442   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:20:42.670557   13103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:20:42.687733   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:20:42.698135   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:20:42.712589   13103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 12:20:42.712891   13103 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:20:42.715807   13103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0906 12:20:42.715876   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:20:42.723104   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:20:42.736529   13103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:20:42.845157   13103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:20:42.954660   13103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:20:42.954733   13103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:20:42.970878   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:43.069021   13103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:20:45.394719   13103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325687442s)
	I0906 12:20:45.394781   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:20:45.405825   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:20:45.415611   13103 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:20:45.518550   13103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:20:45.620332   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:45.730400   13103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:20:45.744586   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:20:45.756085   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:45.867521   13103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:20:45.926066   13103 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:20:45.926144   13103 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:20:45.930542   13103 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 12:20:45.930554   13103 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 12:20:45.930559   13103 command_runner.go:130] > Device: 0,22	Inode: 771         Links: 1
	I0906 12:20:45.930564   13103 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0906 12:20:45.930568   13103 command_runner.go:130] > Access: 2024-09-06 19:20:46.012191218 +0000
	I0906 12:20:45.930573   13103 command_runner.go:130] > Modify: 2024-09-06 19:20:46.012191218 +0000
	I0906 12:20:45.930577   13103 command_runner.go:130] > Change: 2024-09-06 19:20:46.014191220 +0000
	I0906 12:20:45.930581   13103 command_runner.go:130] >  Birth: -
	I0906 12:20:45.930604   13103 start.go:563] Will wait 60s for crictl version
	I0906 12:20:45.930645   13103 ssh_runner.go:195] Run: which crictl
	I0906 12:20:45.933399   13103 command_runner.go:130] > /usr/bin/crictl
	I0906 12:20:45.933622   13103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:20:45.962193   13103 command_runner.go:130] > Version:  0.1.0
	I0906 12:20:45.962207   13103 command_runner.go:130] > RuntimeName:  docker
	I0906 12:20:45.962210   13103 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0906 12:20:45.962214   13103 command_runner.go:130] > RuntimeApiVersion:  v1
	I0906 12:20:45.963280   13103 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:20:45.963347   13103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:20:45.981353   13103 command_runner.go:130] > 27.2.0
	I0906 12:20:45.982262   13103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:20:45.999044   13103 command_runner.go:130] > 27.2.0
	I0906 12:20:46.023107   13103 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:20:46.023157   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:46.023538   13103 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:20:46.028008   13103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:20:46.038612   13103 kubeadm.go:883] updating cluster {Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:20:46.038697   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:20:46.038752   13103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:20:46.051833   13103 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0906 12:20:46.051846   13103 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:20:46.051850   13103 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:20:46.051855   13103 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:20:46.051858   13103 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:20:46.051862   13103 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0906 12:20:46.051865   13103 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0906 12:20:46.051871   13103 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:20:46.051877   13103 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:20:46.051882   13103 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 12:20:46.051948   13103 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:20:46.051957   13103 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:20:46.052037   13103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:20:46.064745   13103 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0906 12:20:46.064758   13103 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:20:46.064762   13103 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:20:46.064766   13103 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:20:46.064769   13103 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:20:46.064773   13103 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0906 12:20:46.064776   13103 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0906 12:20:46.064787   13103 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:20:46.064792   13103 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:20:46.064796   13103 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 12:20:46.065514   13103 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:20:46.065534   13103 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:20:46.065544   13103 kubeadm.go:934] updating node { 192.169.0.33 8443 v1.31.0 docker true true} ...
	I0906 12:20:46.065620   13103 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-459000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:20:46.065684   13103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:20:46.101901   13103 command_runner.go:130] > cgroupfs
	I0906 12:20:46.102506   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:46.102517   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:46.102527   13103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:20:46.102543   13103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.33 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-459000 NodeName:multinode-459000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:20:46.102625   13103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-459000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
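	The kubeadm config dumped above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). A quick structural check of such a stream, run here against a skeleton copy rather than the real file on the VM:

```shell
# Skeleton of the four-document stream; all field values are omitted for brevity.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
grep -c '^kind:' "$cfg"   # 4 documents, one kind per document
```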
	I0906 12:20:46.102686   13103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:20:46.111110   13103 command_runner.go:130] > kubeadm
	I0906 12:20:46.111117   13103 command_runner.go:130] > kubectl
	I0906 12:20:46.111120   13103 command_runner.go:130] > kubelet
	I0906 12:20:46.111230   13103 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:20:46.111277   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:20:46.119320   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0906 12:20:46.132438   13103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:20:46.146346   13103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0906 12:20:46.160046   13103 ssh_runner.go:195] Run: grep 192.169.0.33	control-plane.minikube.internal$ /etc/hosts
	I0906 12:20:46.162862   13103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
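	The /etc/hosts edits above (for host.minikube.internal and control-plane.minikube.internal) use a replace-then-append pattern: grep -v drops any stale entry, the fresh entry is printed after it, and the result is copied over the live file in a single cp, so reruns never duplicate the line. A demonstration against a temp file instead of /etc/hosts, with a made-up stale IP:

```shell
hosts=$(mktemp)
# Seed the file with an unrelated entry plus a stale control-plane entry.
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
# Drop the old entry (if any), append the current one, then swap in one cp.
{ grep -v "${tab}control-plane.minikube.internal$" "$hosts"; \
  printf '192.169.0.33\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep 'control-plane.minikube.internal' "$hosts"   # exactly one, updated entry
```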
	I0906 12:20:46.172928   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:46.273763   13103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:20:46.288239   13103 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000 for IP: 192.169.0.33
	I0906 12:20:46.288251   13103 certs.go:194] generating shared ca certs ...
	I0906 12:20:46.288261   13103 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:46.288443   13103 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:20:46.288516   13103 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:20:46.288526   13103 certs.go:256] generating profile certs ...
	I0906 12:20:46.288635   13103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.key
	I0906 12:20:46.288722   13103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key.154086e5
	I0906 12:20:46.288789   13103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key
	I0906 12:20:46.288802   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:20:46.288824   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:20:46.288840   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:20:46.288861   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:20:46.288878   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:20:46.288913   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:20:46.288942   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:20:46.288960   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:20:46.289058   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:20:46.289106   13103 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:20:46.289115   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:20:46.289188   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:20:46.289239   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:20:46.289279   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:20:46.289387   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:20:46.289437   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.289463   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.289483   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.289983   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:20:46.323599   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:20:46.349693   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:20:46.380553   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:20:46.405494   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 12:20:46.425404   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 12:20:46.445154   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:20:46.464970   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 12:20:46.484693   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:20:46.504348   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:20:46.523910   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:20:46.543476   13103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:20:46.556852   13103 ssh_runner.go:195] Run: openssl version
	I0906 12:20:46.560972   13103 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0906 12:20:46.561024   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:20:46.569323   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572714   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572823   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572861   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.576889   13103 command_runner.go:130] > 3ec20f2e
	I0906 12:20:46.577051   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:20:46.585363   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:20:46.593723   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.596951   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.597034   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.597071   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.601216   13103 command_runner.go:130] > b5213941
	I0906 12:20:46.601259   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:20:46.609583   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:20:46.618022   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621405   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621429   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621461   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.625579   13103 command_runner.go:130] > 51391683
	I0906 12:20:46.625701   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:20:46.634117   13103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:20:46.637570   13103 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:20:46.637580   13103 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0906 12:20:46.637586   13103 command_runner.go:130] > Device: 253,1	Inode: 3148599     Links: 1
	I0906 12:20:46.637591   13103 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 12:20:46.637598   13103 command_runner.go:130] > Access: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637604   13103 command_runner.go:130] > Modify: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637608   13103 command_runner.go:130] > Change: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637612   13103 command_runner.go:130] >  Birth: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637725   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:20:46.642003   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.642072   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:20:46.646243   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.646295   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:20:46.650659   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.650716   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:20:46.654983   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.655072   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:20:46.659282   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.659324   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:20:46.663431   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.663587   13103 kubeadm.go:392] StartCluster: {Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:46.663700   13103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:20:46.680120   13103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:20:46.687982   13103 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0906 12:20:46.687996   13103 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0906 12:20:46.688003   13103 command_runner.go:130] > /var/lib/minikube/etcd:
	I0906 12:20:46.688008   13103 command_runner.go:130] > member
	I0906 12:20:46.688054   13103 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:20:46.688064   13103 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:20:46.688107   13103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:20:46.695454   13103 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:20:46.695768   13103 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-459000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:46.695853   13103 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-459000" cluster setting kubeconfig missing "multinode-459000" context setting]
	I0906 12:20:46.696079   13103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:46.696780   13103 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:46.696975   13103 kapi.go:59] client config for multinode-459000: &rest.Config{Host:"https://192.169.0.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa883ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:20:46.697305   13103 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:20:46.697478   13103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:20:46.704887   13103 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.33
	I0906 12:20:46.704905   13103 kubeadm.go:1160] stopping kube-system containers ...
	I0906 12:20:46.704959   13103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:20:46.723369   13103 command_runner.go:130] > 12b00d3e81cd
	I0906 12:20:46.723381   13103 command_runner.go:130] > b8675b45ba97
	I0906 12:20:46.723384   13103 command_runner.go:130] > 0516c7173c76
	I0906 12:20:46.723387   13103 command_runner.go:130] > 6766a97ec06f
	I0906 12:20:46.723391   13103 command_runner.go:130] > b2cede164434
	I0906 12:20:46.723394   13103 command_runner.go:130] > e4605e60128b
	I0906 12:20:46.723411   13103 command_runner.go:130] > 98079ff18be9
	I0906 12:20:46.723418   13103 command_runner.go:130] > 68811f115b6f
	I0906 12:20:46.723422   13103 command_runner.go:130] > 7158af8be341
	I0906 12:20:46.723426   13103 command_runner.go:130] > fde17951087f
	I0906 12:20:46.723432   13103 command_runner.go:130] > 487be703273e
	I0906 12:20:46.723435   13103 command_runner.go:130] > 95c1a9b114b1
	I0906 12:20:46.723445   13103 command_runner.go:130] > 03508ab110f1
	I0906 12:20:46.723449   13103 command_runner.go:130] > 8b8fefcb9e0b
	I0906 12:20:46.723452   13103 command_runner.go:130] > 6f313c531f3e
	I0906 12:20:46.723455   13103 command_runner.go:130] > 8455632502ed
	I0906 12:20:46.724125   13103 docker.go:483] Stopping containers: [12b00d3e81cd b8675b45ba97 0516c7173c76 6766a97ec06f b2cede164434 e4605e60128b 98079ff18be9 68811f115b6f 7158af8be341 fde17951087f 487be703273e 95c1a9b114b1 03508ab110f1 8b8fefcb9e0b 6f313c531f3e 8455632502ed]
	I0906 12:20:46.724190   13103 ssh_runner.go:195] Run: docker stop 12b00d3e81cd b8675b45ba97 0516c7173c76 6766a97ec06f b2cede164434 e4605e60128b 98079ff18be9 68811f115b6f 7158af8be341 fde17951087f 487be703273e 95c1a9b114b1 03508ab110f1 8b8fefcb9e0b 6f313c531f3e 8455632502ed
	I0906 12:20:46.738443   13103 command_runner.go:130] > 12b00d3e81cd
	I0906 12:20:46.738474   13103 command_runner.go:130] > b8675b45ba97
	I0906 12:20:46.738657   13103 command_runner.go:130] > 0516c7173c76
	I0906 12:20:46.738757   13103 command_runner.go:130] > 6766a97ec06f
	I0906 12:20:46.738837   13103 command_runner.go:130] > b2cede164434
	I0906 12:20:46.738974   13103 command_runner.go:130] > e4605e60128b
	I0906 12:20:46.739000   13103 command_runner.go:130] > 98079ff18be9
	I0906 12:20:46.739061   13103 command_runner.go:130] > 68811f115b6f
	I0906 12:20:46.739156   13103 command_runner.go:130] > 7158af8be341
	I0906 12:20:46.739263   13103 command_runner.go:130] > fde17951087f
	I0906 12:20:46.739379   13103 command_runner.go:130] > 487be703273e
	I0906 12:20:46.739467   13103 command_runner.go:130] > 95c1a9b114b1
	I0906 12:20:46.739588   13103 command_runner.go:130] > 03508ab110f1
	I0906 12:20:46.739640   13103 command_runner.go:130] > 8b8fefcb9e0b
	I0906 12:20:46.739757   13103 command_runner.go:130] > 6f313c531f3e
	I0906 12:20:46.739869   13103 command_runner.go:130] > 8455632502ed
	I0906 12:20:46.740823   13103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 12:20:46.753311   13103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:20:46.762059   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0906 12:20:46.762071   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0906 12:20:46.762077   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0906 12:20:46.762083   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:20:46.762204   13103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:20:46.762210   13103 kubeadm.go:157] found existing configuration files:
	
	I0906 12:20:46.762252   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 12:20:46.769254   13103 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:20:46.769280   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:20:46.769328   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:20:46.776572   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 12:20:46.783758   13103 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:20:46.783776   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:20:46.783811   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:20:46.791113   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 12:20:46.798161   13103 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:20:46.798183   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:20:46.798220   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:20:46.805713   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 12:20:46.812921   13103 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:20:46.812949   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:20:46.812990   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 12:20:46.820390   13103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:20:46.827763   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:46.898290   13103 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:20:46.898453   13103 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 12:20:46.898625   13103 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 12:20:46.898765   13103 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 12:20:46.898960   13103 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0906 12:20:46.899098   13103 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0906 12:20:46.899397   13103 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0906 12:20:46.899561   13103 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0906 12:20:46.899681   13103 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0906 12:20:46.899817   13103 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 12:20:46.899989   13103 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 12:20:46.900143   13103 command_runner.go:130] > [certs] Using the existing "sa" key
	I0906 12:20:46.900985   13103 command_runner.go:130] ! W0906 19:20:47.031470    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:46.901004   13103 command_runner.go:130] ! W0906 19:20:47.032174    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:46.901041   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:46.935711   13103 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:20:47.096680   13103 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:20:47.204439   13103 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 12:20:47.365845   13103 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:20:47.451527   13103 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:20:47.525150   13103 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:20:47.527254   13103 command_runner.go:130] ! W0906 19:20:47.069183    1330 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.527272   13103 command_runner.go:130] ! W0906 19:20:47.069676    1330 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.527286   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.576279   13103 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:20:47.581148   13103 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:20:47.581159   13103 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 12:20:47.689821   13103 command_runner.go:130] ! W0906 19:20:47.697610    1335 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.689851   13103 command_runner.go:130] ! W0906 19:20:47.698106    1335 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.689868   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.746190   13103 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:20:47.746600   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:20:47.748596   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:20:47.749246   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:20:47.750702   13103 command_runner.go:130] ! W0906 19:20:47.870242    1362 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.750732   13103 command_runner.go:130] ! W0906 19:20:47.871098    1362 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.750753   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.814153   13103 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:20:47.826523   13103 command_runner.go:130] ! W0906 19:20:47.947508    1370 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.826546   13103 command_runner.go:130] ! W0906 19:20:47.947979    1370 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.826615   13103 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:20:47.826675   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.327215   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.827064   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.840074   13103 command_runner.go:130] > 1692
	I0906 12:20:48.840096   13103 api_server.go:72] duration metric: took 1.013496031s to wait for apiserver process to appear ...
	I0906 12:20:48.840102   13103 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:20:48.840118   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.026473   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:20:51.026490   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:20:51.026497   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.054937   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:20:51.054956   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:20:51.341860   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.346791   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 12:20:51.346809   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 12:20:51.841712   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.847377   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 12:20:51.847398   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 12:20:52.341716   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:52.345528   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:20:52.345592   13103 round_trippers.go:463] GET https://192.169.0.33:8443/version
	I0906 12:20:52.345598   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:52.345606   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:52.345609   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:52.352319   13103 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 12:20:52.352332   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:52.352337   13103 round_trippers.go:580]     Audit-Id: 5ffc807c-a78c-402c-87d3-b9b415b40e5f
	I0906 12:20:52.352340   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:52.352350   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:52.352354   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:52.352356   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:52.352359   13103 round_trippers.go:580]     Content-Length: 263
	I0906 12:20:52.352363   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:52 GMT
	I0906 12:20:52.352382   13103 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 12:20:52.352432   13103 api_server.go:141] control plane version: v1.31.0
	I0906 12:20:52.352443   13103 api_server.go:131] duration metric: took 3.512352698s to wait for apiserver health ...
	I0906 12:20:52.352449   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:52.352452   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:52.374855   13103 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 12:20:52.395566   13103 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 12:20:52.402927   13103 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 12:20:52.402941   13103 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0906 12:20:52.402950   13103 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0906 12:20:52.402955   13103 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 12:20:52.402959   13103 command_runner.go:130] > Access: 2024-09-06 19:20:14.852309625 +0000
	I0906 12:20:52.402966   13103 command_runner.go:130] > Modify: 2024-09-03 22:42:55.000000000 +0000
	I0906 12:20:52.402971   13103 command_runner.go:130] > Change: 2024-09-06 19:20:13.268309735 +0000
	I0906 12:20:52.402978   13103 command_runner.go:130] >  Birth: -
	I0906 12:20:52.405546   13103 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0906 12:20:52.405555   13103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0906 12:20:52.439971   13103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 12:20:52.805772   13103 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 12:20:52.854248   13103 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 12:20:52.933352   13103 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 12:20:53.005604   13103 command_runner.go:130] > daemonset.apps/kindnet configured
	I0906 12:20:53.007357   13103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:20:53.007404   13103 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:20:53.007414   13103 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:20:53.007474   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:20:53.007480   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.007486   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.007490   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.009554   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.009563   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.009569   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.009572   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.009575   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.009579   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.009591   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.009594   13103 round_trippers.go:580]     Audit-Id: 55484294-9cbd-46c9-bee1-1b642c12b69d
	I0906 12:20:53.010487   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"849"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]

	I0906 12:20:53.013723   13103 system_pods.go:59] 12 kube-system pods found
	I0906 12:20:53.013738   13103 system_pods.go:61] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:20:53.013744   13103 system_pods.go:61] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 12:20:53.013748   13103 system_pods.go:61] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:20:53.013756   13103 system_pods.go:61] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:20:53.013760   13103 system_pods.go:61] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:20:53.013767   13103 system_pods.go:61] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:20:53.013771   13103 system_pods.go:61] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:20:53.013776   13103 system_pods.go:61] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:20:53.013780   13103 system_pods.go:61] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:20:53.013783   13103 system_pods.go:61] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:20:53.013786   13103 system_pods.go:61] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 12:20:53.013790   13103 system_pods.go:61] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running
	I0906 12:20:53.013794   13103 system_pods.go:74] duration metric: took 6.429185ms to wait for pod list to return data ...
	I0906 12:20:53.013800   13103 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:20:53.013833   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes
	I0906 12:20:53.013837   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.013843   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.013846   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.015478   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.015502   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.015511   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.015514   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.015517   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.015520   13103 round_trippers.go:580]     Audit-Id: 30570eec-545b-4745-8743-a1cab2a3fb29
	I0906 12:20:53.015523   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.015525   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.015644   13103 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"849"},"items":[{"metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14782 chars]
	I0906 12:20:53.016196   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016209   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016218   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016221   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016225   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016229   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016233   13103 node_conditions.go:105] duration metric: took 2.429093ms to run NodePressure ...
	I0906 12:20:53.016243   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:53.160252   13103 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 12:20:53.282226   13103 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 12:20:53.283414   13103 command_runner.go:130] ! W0906 19:20:53.201637    2133 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:53.283436   13103 command_runner.go:130] ! W0906 19:20:53.202191    2133 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:53.283454   13103 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 12:20:53.283521   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0906 12:20:53.283525   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.283530   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.283534   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.285658   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.285667   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.285674   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.285678   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.285683   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.285688   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.285692   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.285695   13103 round_trippers.go:580]     Audit-Id: dfd8d4ba-250d-43fd-a3c9-7094cfa9b329
	I0906 12:20:53.286150   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"820","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 31218 chars]
	I0906 12:20:53.286880   13103 kubeadm.go:739] kubelet initialised
	I0906 12:20:53.286889   13103 kubeadm.go:740] duration metric: took 3.428745ms waiting for restarted kubelet to initialise ...
	I0906 12:20:53.286897   13103 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:20:53.286928   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:20:53.286933   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.286939   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.286944   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.289064   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.289072   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.289076   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.289080   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.289082   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.289085   13103 round_trippers.go:580]     Audit-Id: f185bee8-cf54-428e-9251-f89670109af4
	I0906 12:20:53.289088   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.289091   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.290451   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0906 12:20:53.293407   13103 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.293459   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:20:53.293464   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.293470   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.293475   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.295326   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.295335   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.295339   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.295342   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.295345   13103 round_trippers.go:580]     Audit-Id: 83fb4e68-22fb-4080-a855-59e8a5c87034
	I0906 12:20:53.295348   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.295350   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.295353   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.295454   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:20:53.295719   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.295727   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.295733   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.295737   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.297662   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.297677   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.297685   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.297688   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.297691   13103 round_trippers.go:580]     Audit-Id: e5df61ad-c106-47e5-bbc3-4070002c5b9e
	I0906 12:20:53.297694   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.297697   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.297699   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.297927   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.298135   13103 pod_ready.go:98] node "multinode-459000" hosting pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.298146   13103 pod_ready.go:82] duration metric: took 4.727596ms for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.298153   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.298161   13103 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.298194   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-459000
	I0906 12:20:53.298199   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.298205   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.298209   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.299621   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.299629   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.299635   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.299638   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.299642   13103 round_trippers.go:580]     Audit-Id: be77759d-114f-4c80-a5d1-184591aa7427
	I0906 12:20:53.299645   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.299648   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.299650   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.299898   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"820","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6887 chars]
	I0906 12:20:53.300165   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.300172   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.300178   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.300181   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.302558   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.302567   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.302573   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.302576   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.302579   13103 round_trippers.go:580]     Audit-Id: 978a43f1-4d45-4094-ad01-bc549f492e2e
	I0906 12:20:53.302582   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.302586   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.302589   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.302801   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.302977   13103 pod_ready.go:98] node "multinode-459000" hosting pod "etcd-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.302989   13103 pod_ready.go:82] duration metric: took 4.821114ms for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.302995   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "etcd-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.303006   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.303035   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-459000
	I0906 12:20:53.303040   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.303045   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.303049   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.304725   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.304734   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.304739   13103 round_trippers.go:580]     Audit-Id: 744b1630-f218-49d7-bf9e-0874b8ae067c
	I0906 12:20:53.304749   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.304757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.304762   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.304765   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.304768   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.305009   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-459000","namespace":"kube-system","uid":"a7ee0531-75a6-405c-928c-1185a0e5ebd0","resourceVersion":"817","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.33:8443","kubernetes.io/config.hash":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.mirror":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.seen":"2024-09-06T19:16:52.157527221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0906 12:20:53.305246   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.305252   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.305260   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.305264   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.306599   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.306606   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.306611   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.306614   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.306617   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.306621   13103 round_trippers.go:580]     Audit-Id: 9726eabf-d52a-40b0-a363-c7385d06aab6
	I0906 12:20:53.306623   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.306625   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.306860   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.307038   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-apiserver-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.307048   13103 pod_ready.go:82] duration metric: took 4.037219ms for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.307054   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-apiserver-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.307059   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.307089   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-459000
	I0906 12:20:53.307094   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.307099   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.307103   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.308747   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.308756   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.308763   13103 round_trippers.go:580]     Audit-Id: 9a50b907-1158-4251-97c3-8744af1d441b
	I0906 12:20:53.308797   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.308802   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.308806   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.308810   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.308812   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.308934   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-459000","namespace":"kube-system","uid":"ef9a4034-636f-4d52-b328-40aff0e03ccb","resourceVersion":"818","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.mirror":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.seen":"2024-09-06T19:16:52.157528036Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0906 12:20:53.409587   13103 request.go:632] Waited for 100.344038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.409636   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.409642   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.409649   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.409678   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.411918   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.411930   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.411938   13103 round_trippers.go:580]     Audit-Id: 36f8d9a0-08c1-4900-a883-c98118ddb954
	I0906 12:20:53.411943   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.411948   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.411951   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.411976   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.411984   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.412084   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.412281   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-controller-manager-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.412293   13103 pod_ready.go:82] duration metric: took 105.228203ms for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.412300   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-controller-manager-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.412305   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.609009   13103 request.go:632] Waited for 196.662551ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:20:53.609093   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:20:53.609102   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.609109   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.609117   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.610900   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.610911   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.610918   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.610924   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.610933   13103 round_trippers.go:580]     Audit-Id: 8f5d6aad-4ab1-48b5-889e-18c35f8c2f26
	I0906 12:20:53.610936   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.610940   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.610944   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.611070   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crzpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"253c78d8-0d56-49e8-a00c-99218c50beac","resourceVersion":"505","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:20:53.809000   13103 request.go:632] Waited for 197.654908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:20:53.809067   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:20:53.809076   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.809084   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.809090   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.810657   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.810685   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.810691   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.810694   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.810697   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.810700   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.810704   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.810706   13103 round_trippers.go:580]     Audit-Id: 5e585264-4859-4285-aeec-7287183c8596
	I0906 12:20:53.810806   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m02","uid":"42483c05-2f0a-48b5-a783-4c5958284f86","resourceVersion":"573","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_17_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3818 chars]
	I0906 12:20:53.810982   13103 pod_ready.go:93] pod "kube-proxy-crzpl" in "kube-system" namespace has status "Ready":"True"
	I0906 12:20:53.810990   13103 pod_ready.go:82] duration metric: took 398.681997ms for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.810997   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.009014   13103 request.go:632] Waited for 197.982629ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:20:54.009087   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:20:54.009094   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.009120   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.009127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.010937   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.010949   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.010956   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.010962   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.010969   13103 round_trippers.go:580]     Audit-Id: 16e1e167-aa04-4560-aac6-3565f9b98f3d
	I0906 12:20:54.010975   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.010978   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.010980   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.011063   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t24bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"626397be-3b5a-4dd4-8932-283e8edb0d27","resourceVersion":"849","creationTimestamp":"2024-09-06T19:16:56Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0906 12:20:54.209036   13103 request.go:632] Waited for 197.706507ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:54.209076   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:54.209082   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.209116   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.209123   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.210677   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.210689   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.210699   13103 round_trippers.go:580]     Audit-Id: 84f2f37b-1511-4669-aa06-cc83e829c4c3
	I0906 12:20:54.210707   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.210716   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.210722   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.210730   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.210734   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.210986   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:54.211173   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-proxy-t24bs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:54.211182   13103 pod_ready.go:82] duration metric: took 400.183556ms for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:54.211191   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-proxy-t24bs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:54.211199   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.409019   13103 request.go:632] Waited for 197.785012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:20:54.409077   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:20:54.409083   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.409089   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.409093   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.410708   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.410718   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.410723   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.410726   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.410729   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.410733   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.410735   13103 round_trippers.go:580]     Audit-Id: 9bdf799c-01ec-497a-9877-acc5ee1c1400
	I0906 12:20:54.410738   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.410823   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vqcpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6","resourceVersion":"735","creationTimestamp":"2024-09-06T19:18:30Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:20:54.607514   13103 request.go:632] Waited for 196.41514ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:20:54.607567   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:20:54.607574   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.607581   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.607587   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.609573   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.609582   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.609587   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.609598   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.609601   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.609604   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.609606   13103 round_trippers.go:580]     Audit-Id: bc2698b5-26ee-4b75-8329-688459bdcba8
	I0906 12:20:54.609613   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.609723   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m03","uid":"6c54d256-cf96-4ec0-9d0b-36c85c77ef2b","resourceVersion":"760","creationTimestamp":"2024-09-06T19:19:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_19_25_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:19:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3635 chars]
	I0906 12:20:54.609895   13103 pod_ready.go:93] pod "kube-proxy-vqcpj" in "kube-system" namespace has status "Ready":"True"
	I0906 12:20:54.609903   13103 pod_ready.go:82] duration metric: took 398.702285ms for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.609909   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.809054   13103 request.go:632] Waited for 199.102039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:20:54.809116   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:20:54.809123   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.809130   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.809135   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.811199   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:54.811208   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.811213   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.811217   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.811220   13103 round_trippers.go:580]     Audit-Id: 14d96cdd-752b-4f32-81b5-946d2a4fb9c9
	I0906 12:20:54.811222   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.811226   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.811232   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.811498   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-459000","namespace":"kube-system","uid":"4602221a-c2e8-4f7d-a31e-2910196cb32b","resourceVersion":"819","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.mirror":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.seen":"2024-09-06T19:16:46.929338017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0906 12:20:55.009522   13103 request.go:632] Waited for 197.762294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.009571   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.009578   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.009584   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.009588   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.011014   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:55.011021   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.011025   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.011031   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.011033   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.011038   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.011041   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.011044   13103 round_trippers.go:580]     Audit-Id: 74dc264e-7739-4a48-972c-506fbb05ade8
	I0906 12:20:55.011131   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:55.011329   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-scheduler-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:55.011339   13103 pod_ready.go:82] duration metric: took 401.42623ms for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:55.011345   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-scheduler-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:55.011353   13103 pod_ready.go:39] duration metric: took 1.724456804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:20:55.011367   13103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:20:55.022277   13103 command_runner.go:130] > -16
	I0906 12:20:55.022455   13103 ops.go:34] apiserver oom_adj: -16
	I0906 12:20:55.022461   13103 kubeadm.go:597] duration metric: took 8.334425046s to restartPrimaryControlPlane
	I0906 12:20:55.022467   13103 kubeadm.go:394] duration metric: took 8.358925932s to StartCluster
	I0906 12:20:55.022482   13103 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:55.022574   13103 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:55.022988   13103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:55.023242   13103 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:20:55.023269   13103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:20:55.023397   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:55.046055   13103 out.go:177] * Verifying Kubernetes components...
	I0906 12:20:55.088345   13103 out.go:177] * Enabled addons: 
	I0906 12:20:55.109104   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:55.130229   13103 addons.go:510] duration metric: took 106.968501ms for enable addons: enabled=[]
	I0906 12:20:55.271679   13103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:20:55.282375   13103 node_ready.go:35] waiting up to 6m0s for node "multinode-459000" to be "Ready" ...
	I0906 12:20:55.282438   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.282444   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.282450   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.282453   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.283922   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:55.283933   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.283939   13103 round_trippers.go:580]     Audit-Id: e487ae5e-005a-48d5-b58f-3d58f014af16
	I0906 12:20:55.283945   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.283948   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.283952   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.283955   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.283957   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.284279   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:55.784190   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.784216   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.784227   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.784232   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.787081   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:55.787097   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.787104   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.787116   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.787120   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.787124   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.787128   13103 round_trippers.go:580]     Audit-Id: 613bdd38-a63c-46c4-ad1d-e23b6b4ead50
	I0906 12:20:55.787132   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.787225   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:56.283042   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:56.283069   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:56.283081   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:56.283086   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:56.285909   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:56.285924   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:56.285931   13103 round_trippers.go:580]     Audit-Id: 336e04a7-5b77-468d-a980-45a2482d9f8c
	I0906 12:20:56.285935   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:56.285938   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:56.285942   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:56.285946   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:56.285949   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:56 GMT
	I0906 12:20:56.286021   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:56.783350   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:56.783375   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:56.783387   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:56.783394   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:56.786321   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:56.786335   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:56.786342   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:56 GMT
	I0906 12:20:56.786346   13103 round_trippers.go:580]     Audit-Id: 4d25f0ac-c3f5-4f16-98b8-45432f07e35c
	I0906 12:20:56.786350   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:56.786354   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:56.786358   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:56.786361   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:56.786856   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:57.282948   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:57.282975   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:57.282986   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:57.282992   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:57.285671   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:57.285684   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:57.285691   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:57 GMT
	I0906 12:20:57.285695   13103 round_trippers.go:580]     Audit-Id: e05176ca-d4e5-4302-8520-49057bbbad74
	I0906 12:20:57.285699   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:57.285703   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:57.285720   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:57.285733   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:57.285862   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:57.286129   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:20:57.784635   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:57.784663   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:57.784701   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:57.784710   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:57.787321   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:57.787336   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:57.787343   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:57.787348   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:57 GMT
	I0906 12:20:57.787353   13103 round_trippers.go:580]     Audit-Id: 22a92049-2ac6-4f14-a36b-43fdd32ce11f
	I0906 12:20:57.787357   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:57.787363   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:57.787366   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:57.787656   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:58.282909   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:58.282936   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:58.282951   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:58.282957   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:58.285758   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:58.285775   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:58.285783   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:58 GMT
	I0906 12:20:58.285789   13103 round_trippers.go:580]     Audit-Id: d296704e-2819-42cc-ba15-d8774b071678
	I0906 12:20:58.285795   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:58.285801   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:58.285806   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:58.285811   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:58.285911   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:58.782638   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:58.782660   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:58.782696   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:58.782704   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:58.784836   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:58.784849   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:58.784856   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:58 GMT
	I0906 12:20:58.784862   13103 round_trippers.go:580]     Audit-Id: 7471c8ca-95e1-4e27-b818-6a3ee6a94f84
	I0906 12:20:58.784867   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:58.784873   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:58.784875   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:58.784878   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:58.784952   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.283284   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:59.283306   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:59.283315   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:59.283324   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:59.285640   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:59.285651   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:59.285657   13103 round_trippers.go:580]     Audit-Id: b32b984e-803b-45c6-a485-2f6621da8200
	I0906 12:20:59.285659   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:59.285663   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:59.285665   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:59.285669   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:59.285672   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:59 GMT
	I0906 12:20:59.285774   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.783737   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:59.783761   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:59.783773   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:59.783780   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:59.786325   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:59.786343   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:59.786351   13103 round_trippers.go:580]     Audit-Id: 9216e10e-3b70-4a91-9a52-a8a339880eb8
	I0906 12:20:59.786357   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:59.786360   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:59.786364   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:59.786367   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:59.786374   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:59 GMT
	I0906 12:20:59.786769   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.787020   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:00.283378   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:00.283465   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:00.283478   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:00.283485   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:00.285634   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:00.285646   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:00.285651   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:00.285654   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:00.285661   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:00.285663   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:00.285683   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:00 GMT
	I0906 12:21:00.285688   13103 round_trippers.go:580]     Audit-Id: b74fdec6-ab72-46ec-970e-11133a30eb49
	I0906 12:21:00.285749   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:00.782855   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:00.782871   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:00.782877   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:00.782880   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:00.785063   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:00.785077   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:00.785083   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:00 GMT
	I0906 12:21:00.785086   13103 round_trippers.go:580]     Audit-Id: b8b54770-654c-47ac-bb70-f47239d9a85f
	I0906 12:21:00.785090   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:00.785094   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:00.785097   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:00.785100   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:00.785269   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.283867   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:01.283894   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:01.283904   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:01.283910   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:01.286375   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:01.286388   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:01.286397   13103 round_trippers.go:580]     Audit-Id: a8e07055-17c9-44ef-a99d-9029a0fff2ce
	I0906 12:21:01.286401   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:01.286433   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:01.286441   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:01.286445   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:01.286450   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:01 GMT
	I0906 12:21:01.286643   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.784066   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:01.784089   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:01.784101   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:01.784110   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:01.786790   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:01.786802   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:01.786808   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:01.786810   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:01 GMT
	I0906 12:21:01.786818   13103 round_trippers.go:580]     Audit-Id: 709cfb3e-a937-4f70-b01f-a375a7ecd6d2
	I0906 12:21:01.786822   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:01.786824   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:01.786827   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:01.787030   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.787224   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:02.283110   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:02.283218   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:02.283234   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:02.283241   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:02.285929   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:02.285942   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:02.285947   13103 round_trippers.go:580]     Audit-Id: 23e67746-8645-42b6-b246-9ea7bad09da7
	I0906 12:21:02.285950   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:02.285952   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:02.285954   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:02.285957   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:02.285980   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:02 GMT
	I0906 12:21:02.286063   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:02.784562   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:02.784589   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:02.784601   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:02.784607   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:02.787179   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:02.787191   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:02.787196   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:02.787199   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:02 GMT
	I0906 12:21:02.787202   13103 round_trippers.go:580]     Audit-Id: 3138c6d4-06dc-4784-ad86-3d2bf39d9d18
	I0906 12:21:02.787204   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:02.787207   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:02.787210   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:02.787360   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:03.282839   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:03.282867   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:03.282879   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:03.282887   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:03.285832   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:03.285850   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:03.285857   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:03.285865   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:03.285869   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:03.285874   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:03 GMT
	I0906 12:21:03.285878   13103 round_trippers.go:580]     Audit-Id: c405913f-9342-44dc-931f-f8414fcdd19e
	I0906 12:21:03.285882   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:03.285942   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:03.782685   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:03.782706   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:03.782716   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:03.782721   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:03.785444   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:03.785456   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:03.785462   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:03.785465   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:03.785468   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:03.785471   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:03 GMT
	I0906 12:21:03.785473   13103 round_trippers.go:580]     Audit-Id: 5a0d7dbe-5224-44ee-a0df-2ba863732ca1
	I0906 12:21:03.785477   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:03.785734   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:04.282619   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:04.282642   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:04.282654   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:04.282662   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:04.285440   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:04.285454   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:04.285462   13103 round_trippers.go:580]     Audit-Id: 7fa3551b-6c18-4c05-a1f9-feedce2df755
	I0906 12:21:04.285465   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:04.285468   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:04.285472   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:04.285476   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:04.285479   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:04 GMT
	I0906 12:21:04.285554   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:04.285813   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:04.783450   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:04.783471   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:04.783483   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:04.783492   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:04.786538   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:04.786553   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:04.786566   13103 round_trippers.go:580]     Audit-Id: e6e70310-56ea-4d9b-9dfb-50f1853d1c43
	I0906 12:21:04.786572   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:04.786578   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:04.786582   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:04.786587   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:04.786592   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:04 GMT
	I0906 12:21:04.786801   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:05.282653   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:05.282671   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:05.282680   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:05.282687   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:05.285490   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:05.285505   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:05.285512   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:05.285517   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:05.285521   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:05.285526   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:05.285530   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:05 GMT
	I0906 12:21:05.285534   13103 round_trippers.go:580]     Audit-Id: 78254ef8-b353-4eda-8274-53fea1e71827
	I0906 12:21:05.285829   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:05.783384   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:05.783407   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:05.783417   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:05.783422   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:05.786324   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:05.786338   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:05.786346   13103 round_trippers.go:580]     Audit-Id: 7cadcf56-0277-4ba8-b4c6-6b99b793cc5a
	I0906 12:21:05.786350   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:05.786353   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:05.786358   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:05.786362   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:05.786367   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:05 GMT
	I0906 12:21:05.786462   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:06.283633   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:06.283652   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:06.283660   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:06.283665   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:06.286164   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:06.286176   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:06.286181   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:06.286184   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:06 GMT
	I0906 12:21:06.286193   13103 round_trippers.go:580]     Audit-Id: 07623995-247a-4533-b371-d74f13933cf9
	I0906 12:21:06.286197   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:06.286200   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:06.286203   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:06.286261   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:06.286456   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:06.784394   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:06.784416   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:06.784425   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:06.784430   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:06.786848   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:06.786862   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:06.786874   13103 round_trippers.go:580]     Audit-Id: ebdeab18-9907-4e9c-b0af-049ddea0dffa
	I0906 12:21:06.786882   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:06.786890   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:06.786898   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:06.786905   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:06.786910   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:06 GMT
	I0906 12:21:06.787176   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:07.283258   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:07.283286   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:07.283298   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:07.283303   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:07.286283   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:07.286298   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:07.286304   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:07 GMT
	I0906 12:21:07.286309   13103 round_trippers.go:580]     Audit-Id: 198ec8db-095b-4749-936f-50fdaebba154
	I0906 12:21:07.286313   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:07.286318   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:07.286322   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:07.286325   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:07.286389   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:07.782722   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:07.782750   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:07.782762   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:07.782772   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:07.787689   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:07.787701   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:07.787706   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:07.787709   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:07 GMT
	I0906 12:21:07.787712   13103 round_trippers.go:580]     Audit-Id: 43ab80c6-cadf-474a-a628-290349ba4713
	I0906 12:21:07.787733   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:07.787739   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:07.787742   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:07.788169   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:08.284167   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:08.284228   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:08.284239   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:08.284244   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:08.286632   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:08.286645   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:08.286651   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:08.286655   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:08.286658   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:08.286661   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:08 GMT
	I0906 12:21:08.286664   13103 round_trippers.go:580]     Audit-Id: 33efcd67-9d4d-4b18-9e85-046bf5c121f5
	I0906 12:21:08.286666   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:08.286715   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:08.286913   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:08.782461   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:08.782499   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:08.782508   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:08.782513   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:08.784535   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:08.784551   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:08.784563   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:08 GMT
	I0906 12:21:08.784568   13103 round_trippers.go:580]     Audit-Id: 52e2b62a-8c0c-4d1c-8f26-db12cc5752c5
	I0906 12:21:08.784571   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:08.784574   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:08.784578   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:08.784581   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:08.784653   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:09.283882   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:09.283906   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:09.283917   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:09.283925   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:09.286310   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:09.286322   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:09.286330   13103 round_trippers.go:580]     Audit-Id: f8e97681-9a81-4185-96fe-451b96e23c20
	I0906 12:21:09.286333   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:09.286337   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:09.286340   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:09.286364   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:09.286375   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:09 GMT
	I0906 12:21:09.286444   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:09.784111   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:09.784128   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:09.784136   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:09.784153   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:09.785974   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:09.785984   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:09.785994   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:09.786000   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:09.786004   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:09.786008   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:09 GMT
	I0906 12:21:09.786012   13103 round_trippers.go:580]     Audit-Id: 95b4ab98-eaf7-47a9-93be-d4364de7462c
	I0906 12:21:09.786014   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:09.786405   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:10.283812   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:10.283844   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:10.283887   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:10.283896   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:10.286832   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:10.286844   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:10.286850   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:10.286854   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:10 GMT
	I0906 12:21:10.286859   13103 round_trippers.go:580]     Audit-Id: c137c5aa-ce53-40a1-8e3f-d5c95e35f70b
	I0906 12:21:10.286863   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:10.286868   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:10.286872   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:10.287129   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:10.287326   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:10.782721   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:10.782741   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:10.782751   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:10.782764   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:10.785658   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:10.785670   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:10.785687   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:10.785692   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:10.785696   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:10.785700   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:10 GMT
	I0906 12:21:10.785702   13103 round_trippers.go:580]     Audit-Id: 26d17310-f8a6-4ca9-96e2-32b23e99741c
	I0906 12:21:10.785705   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:10.785807   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:11.284555   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.284583   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.284595   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.284603   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.287262   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:11.287276   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.287283   13103 round_trippers.go:580]     Audit-Id: 298419bc-1b89-4009-b333-f9ebaaac792a
	I0906 12:21:11.287287   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.287291   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.287295   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.287299   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.287303   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.287426   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:11.782776   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.782797   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.782827   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.782834   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.785262   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:11.785274   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.785280   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.785283   13103 round_trippers.go:580]     Audit-Id: 51538514-8dce-4de4-82af-a290dfaf42ba
	I0906 12:21:11.785286   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.785309   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.785316   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.785319   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.785398   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:11.785587   13103 node_ready.go:49] node "multinode-459000" has status "Ready":"True"
	I0906 12:21:11.785600   13103 node_ready.go:38] duration metric: took 16.50328117s for node "multinode-459000" to be "Ready" ...
	I0906 12:21:11.785607   13103 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:21:11.785647   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:11.785653   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.785658   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.785663   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.787289   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:11.787313   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.787322   13103 round_trippers.go:580]     Audit-Id: 2eb240ae-11a6-4539-b244-1f271eb9eb36
	I0906 12:21:11.787326   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.787330   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.787332   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.787336   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.787338   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.787991   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"908"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88963 chars]
	I0906 12:21:11.789896   13103 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:11.789934   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:11.789939   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.789945   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.789949   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.791666   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:11.791678   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.791685   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.791691   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.791694   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.791699   13103 round_trippers.go:580]     Audit-Id: e2e9113d-9ff2-4043-9551-32ea69ce30f1
	I0906 12:21:11.791703   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.791706   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.791821   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:11.792083   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.792090   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.792095   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.792099   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.793082   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:11.793091   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.793099   13103 round_trippers.go:580]     Audit-Id: af531fa0-d516-472c-b40f-a602285a709a
	I0906 12:21:11.793105   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.793110   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.793116   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.793121   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.793126   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.793281   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:12.290348   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:12.290372   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.290383   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.290392   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.292907   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:12.292922   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.292929   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.292933   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.292938   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.292941   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.292946   13103 round_trippers.go:580]     Audit-Id: edba7332-6fb8-4802-9aee-2c3c9563ae9c
	I0906 12:21:12.292949   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.293171   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:12.293453   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:12.293460   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.293465   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.293468   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.294506   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.294516   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.294522   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.294527   13103 round_trippers.go:580]     Audit-Id: 15e49219-17f5-4b87-8dc8-8dd484c4cd61
	I0906 12:21:12.294532   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.294537   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.294540   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.294543   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.294732   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:12.790491   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:12.790508   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.790518   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.790523   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.792321   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.792329   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.792334   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.792338   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.792340   13103 round_trippers.go:580]     Audit-Id: 12dc6b9d-7810-4c86-9fc1-81575bbae058
	I0906 12:21:12.792343   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.792346   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.792349   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.792438   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:12.792725   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:12.792732   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.792738   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.792743   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.794016   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.794023   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.794028   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.794031   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.794034   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.794036   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.794039   13103 round_trippers.go:580]     Audit-Id: cf5f613a-ccd9-4db7-9429-f36a136edcb0
	I0906 12:21:12.794043   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.794107   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.290999   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:13.291027   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.291039   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.291046   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.294091   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:13.294107   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.294114   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.294119   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.294122   13103 round_trippers.go:580]     Audit-Id: 2914cdba-2b31-4706-b8b6-9fc62d2eb6f8
	I0906 12:21:13.294127   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.294131   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.294136   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.294376   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:13.294791   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:13.294801   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.294809   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.294813   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.296177   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:13.296187   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.296190   13103 round_trippers.go:580]     Audit-Id: c8739fcf-eef5-458d-95ca-2d0ad6c03ca4
	I0906 12:21:13.296194   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.296198   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.296202   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.296206   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.296210   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.296360   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.791389   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:13.791416   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.791428   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.791436   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.794504   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:13.794524   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.794531   13103 round_trippers.go:580]     Audit-Id: 1e0fc598-7606-4704-947f-eff0dfcd612d
	I0906 12:21:13.794536   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.794555   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.794563   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.794567   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.794574   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.794767   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:13.795159   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:13.795169   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.795177   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.795181   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.796593   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:13.796602   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.796607   13103 round_trippers.go:580]     Audit-Id: f2819312-997f-4644-981a-c9a96a4b81c4
	I0906 12:21:13.796611   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.796613   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.796616   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.796618   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.796621   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.796684   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.796852   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:14.290091   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:14.290107   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.290116   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.290121   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.292386   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:14.292398   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.292404   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.292408   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.292420   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.292423   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.292426   13103 round_trippers.go:580]     Audit-Id: b216591b-36b4-4ea5-8115-7316edee1389
	I0906 12:21:14.292429   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.292507   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:14.292791   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:14.292798   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.292803   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.292807   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.293808   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:14.293817   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.293824   13103 round_trippers.go:580]     Audit-Id: 3d892497-eaef-4670-a14b-7ad0fc9e3ba4
	I0906 12:21:14.293829   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.293833   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.293836   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.293839   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.293841   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.294121   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:14.790294   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:14.790336   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.790350   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.790372   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.792990   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:14.793003   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.793011   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.793014   13103 round_trippers.go:580]     Audit-Id: 18d2bb7c-6a82-4cb6-83fb-3ff3f0702de1
	I0906 12:21:14.793018   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.793020   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.793023   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.793057   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.793204   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:14.793496   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:14.793503   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.793509   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.793512   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.794616   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:14.794624   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.794628   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.794632   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.794635   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.794637   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.794640   13103 round_trippers.go:580]     Audit-Id: fc4d182e-61b8-4501-ac19-a68778dfcb78
	I0906 12:21:14.794643   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.794780   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:15.290106   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:15.290127   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.290135   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.290140   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.292663   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:15.292675   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.292680   13103 round_trippers.go:580]     Audit-Id: 103f5efe-9afc-4fd2-a664-63ec6be292a5
	I0906 12:21:15.292683   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.292687   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.292690   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.292692   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.292695   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.292764   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:15.293046   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:15.293053   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.293058   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.293062   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.294226   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:15.294245   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.294254   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.294258   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.294261   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.294264   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.294266   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.294268   13103 round_trippers.go:580]     Audit-Id: eee0a50e-ed8d-4c10-b2cf-e8e447bb8f85
	I0906 12:21:15.294325   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:15.790866   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:15.790888   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.790898   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.790904   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.793667   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:15.793683   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.793689   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.793693   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.793699   13103 round_trippers.go:580]     Audit-Id: 4a951143-6879-4262-b124-530ae44f12b6
	I0906 12:21:15.793703   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.793706   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.793725   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.793900   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:15.794275   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:15.794286   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.794293   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.794297   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.795734   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:15.795744   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.795749   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.795754   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.795758   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.795762   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.795765   13103 round_trippers.go:580]     Audit-Id: 1d3366c9-abcd-444b-901f-cd8c59b24b0b
	I0906 12:21:15.795767   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.795821   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:16.290256   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:16.290275   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.290284   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.290290   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.292642   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:16.292655   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.292660   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.292663   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.292667   13103 round_trippers.go:580]     Audit-Id: d728eba4-78e6-490d-9876-de40ab3d2504
	I0906 12:21:16.292670   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.292674   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.292677   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.292961   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:16.293244   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:16.293252   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.293257   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.293261   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.294276   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:16.294285   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.294290   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.294294   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.294297   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.294300   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.294303   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.294306   13103 round_trippers.go:580]     Audit-Id: cae7b7ec-4833-4094-b8df-dbf19c7d37d2
	I0906 12:21:16.294558   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:16.294728   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:16.791363   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:16.791390   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.791402   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.791408   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.794048   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:16.794060   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.794065   13103 round_trippers.go:580]     Audit-Id: f59eb22e-fd55-4628-b75d-05898d911e96
	I0906 12:21:16.794069   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.794071   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.794075   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.794077   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.794081   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.794151   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:16.794458   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:16.794465   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.794470   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.794474   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.795665   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:16.795672   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.795678   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.795681   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.795685   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.795687   13103 round_trippers.go:580]     Audit-Id: ec472ec1-1223-49bc-8f4f-91e810fc4307
	I0906 12:21:16.795690   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.795693   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.795800   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:17.289973   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:17.289991   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.289997   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.290000   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.291730   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.291752   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.291766   13103 round_trippers.go:580]     Audit-Id: c1fa4522-de4c-4930-9edc-e416768ea52d
	I0906 12:21:17.291786   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.291792   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.291795   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.291798   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.291802   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.291902   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:17.292221   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:17.292228   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.292234   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.292237   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.293364   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.293375   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.293382   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.293386   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.293390   13103 round_trippers.go:580]     Audit-Id: 11f2e1d3-2e85-474d-9b62-a390693faa18
	I0906 12:21:17.293393   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.293395   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.293398   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.293453   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:17.790169   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:17.790185   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.790190   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.790193   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.791789   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.791801   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.791808   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.791814   13103 round_trippers.go:580]     Audit-Id: 9ac97b11-a198-4efb-8efc-0d2cca12e1db
	I0906 12:21:17.791821   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.791827   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.791833   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.791838   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.792162   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:17.792474   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:17.792481   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.792487   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.792492   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.793759   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.793771   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.793778   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.793783   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.793788   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.793792   13103 round_trippers.go:580]     Audit-Id: 3b1483dc-be8e-438f-bf9b-c9aa98fde328
	I0906 12:21:17.793796   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.793800   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.793931   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:18.290365   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:18.290394   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.290406   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.290472   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.292778   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:18.292793   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.292798   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.292802   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.292804   13103 round_trippers.go:580]     Audit-Id: 89948fe5-93dd-4262-9047-3782b382d578
	I0906 12:21:18.292807   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.292809   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.292811   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.292878   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:18.293174   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:18.293181   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.293186   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.293189   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.294291   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:18.294299   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.294311   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.294316   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.294318   13103 round_trippers.go:580]     Audit-Id: 7863f14e-37f8-425c-bace-a4f1fd6c881a
	I0906 12:21:18.294322   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.294325   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.294328   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.294974   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:18.295151   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:18.790221   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:18.790246   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.790258   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.790286   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.792634   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:18.792650   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.792658   13103 round_trippers.go:580]     Audit-Id: dff2e427-1f05-4c64-9b5f-a6b13eadb645
	I0906 12:21:18.792662   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.792666   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.792670   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.792676   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.792679   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.792781   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:18.793116   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:18.793145   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.793152   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.793170   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.794612   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:18.794620   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.794625   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.794628   13103 round_trippers.go:580]     Audit-Id: 1a6da25b-f653-4360-b94d-81192052ff13
	I0906 12:21:18.794632   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.794635   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.794640   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.794643   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.794851   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:19.290301   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:19.290337   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.290346   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.290351   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.292530   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:19.292543   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.292548   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.292551   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.292554   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.292558   13103 round_trippers.go:580]     Audit-Id: 2f2fec6e-8368-4ebc-b6ab-4ad12cbf992b
	I0906 12:21:19.292561   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.292564   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.292633   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:19.292932   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:19.292939   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.292944   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.292948   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.294059   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:19.294068   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.294073   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.294077   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.294080   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.294082   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.294085   13103 round_trippers.go:580]     Audit-Id: ce182c3f-f63b-477f-9d1f-903a0e58563f
	I0906 12:21:19.294088   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.294240   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:19.792069   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:19.792088   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.792096   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.792100   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.794256   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:19.794269   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.794274   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.794278   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.794280   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.794282   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.794285   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.794287   13103 round_trippers.go:580]     Audit-Id: 004f896c-9063-4725-b97a-f4adea5fb1c5
	I0906 12:21:19.794494   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:19.794780   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:19.794787   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.794793   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.794796   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.798926   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:19.798937   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.798941   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.798944   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.798947   13103 round_trippers.go:580]     Audit-Id: 4c388aad-97a1-4855-90d9-b470c8d951ee
	I0906 12:21:19.798949   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.798951   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.798954   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.799557   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:20.290622   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:20.290645   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.290657   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.290663   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.293197   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:20.293213   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.293220   13103 round_trippers.go:580]     Audit-Id: fab7e316-5b57-43ad-81ee-16e332f18312
	I0906 12:21:20.293224   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.293228   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.293231   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.293236   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.293241   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.293329   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:20.293699   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:20.293708   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.293716   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.293723   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.294963   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:20.294973   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.294978   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.294985   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.294991   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.294995   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.295000   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.295003   13103 round_trippers.go:580]     Audit-Id: 772e3113-a0aa-49f0-90ea-85d876fbe1f2
	I0906 12:21:20.295067   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:20.295232   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:20.792125   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:20.792146   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.792158   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.792167   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.795233   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:20.795252   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.795260   13103 round_trippers.go:580]     Audit-Id: 615780de-f810-4f27-a16c-ab7c2e73713e
	I0906 12:21:20.795264   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.795269   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.795272   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.795276   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.795280   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.795665   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:20.796042   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:20.796052   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.796060   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.796081   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.797582   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:20.797591   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.797597   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.797601   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.797605   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.797608   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.797612   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.797617   13103 round_trippers.go:580]     Audit-Id: 91eba761-d083-4d31-84b5-7de10ea4f1fa
	I0906 12:21:20.798027   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:21.292039   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:21.292067   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.292079   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.292085   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.295020   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:21.295040   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.295051   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.295059   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.295074   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.295080   13103 round_trippers.go:580]     Audit-Id: da3b9516-e0b0-4030-8bb5-01eecf8f60f0
	I0906 12:21:21.295085   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.295090   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.295205   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:21.295588   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:21.295599   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.295606   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.295610   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.296951   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:21.296959   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.296964   13103 round_trippers.go:580]     Audit-Id: 4f65491b-a85f-457f-b4a6-9957d11b1b92
	I0906 12:21:21.296980   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.296987   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.296990   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.296992   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.296995   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.297061   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:21.790165   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:21.790183   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.790212   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.790223   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.792474   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:21.792486   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.792491   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.792495   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.792498   13103 round_trippers.go:580]     Audit-Id: e76b82d6-7a4c-491d-a0e1-ec55533b249e
	I0906 12:21:21.792501   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.792504   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.792507   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.792692   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:21.792977   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:21.792984   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.792989   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.792993   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.794028   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:21.794035   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.794040   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.794043   13103 round_trippers.go:580]     Audit-Id: a5239146-5aef-41b1-a558-92ec46d1ec96
	I0906 12:21:21.794046   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.794050   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.794053   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.794056   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.794392   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:22.291077   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:22.291095   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.291103   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.291109   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.293678   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:22.293690   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.293698   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.293702   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.293706   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.293712   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.293715   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.293720   13103 round_trippers.go:580]     Audit-Id: 830b9445-9e92-4e00-a756-44b08fd5b00f
	I0906 12:21:22.293841   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:22.294140   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:22.294148   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.294154   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.294157   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.295227   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:22.295237   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.295242   13103 round_trippers.go:580]     Audit-Id: d1cd6776-0a0d-4d08-a619-0d9c0f5c6498
	I0906 12:21:22.295259   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.295265   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.295268   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.295271   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.295275   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.295424   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:22.295600   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:22.792126   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:22.792148   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.792160   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.792166   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.795102   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:22.795118   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.795125   13103 round_trippers.go:580]     Audit-Id: 0d5e174a-14c0-414f-bd28-e23766377584
	I0906 12:21:22.795129   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.795132   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.795138   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.795144   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.795150   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.795330   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:22.795708   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:22.795718   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.795726   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.795731   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.796977   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:22.796984   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.796990   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.796995   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.796999   13103 round_trippers.go:580]     Audit-Id: c85d75ab-f62a-4f5f-b60d-c6982eb9e60b
	I0906 12:21:22.797002   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.797006   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.797009   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.797298   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:23.292087   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:23.292107   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.292119   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.292127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.294736   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:23.294752   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.294762   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.294768   13103 round_trippers.go:580]     Audit-Id: 844e43cd-1b1e-41c0-937a-9274b6eb3fb9
	I0906 12:21:23.294773   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.294778   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.294785   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.294790   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.295083   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:23.295380   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:23.295388   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.295394   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.295398   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.296737   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:23.296745   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.296749   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.296753   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.296756   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.296759   13103 round_trippers.go:580]     Audit-Id: e7b039bb-92fe-488d-81bd-ffa5a26d96a7
	I0906 12:21:23.296761   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.296764   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.296946   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:23.792162   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:23.792185   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.792197   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.792203   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.796761   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:23.796773   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.796778   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.796782   13103 round_trippers.go:580]     Audit-Id: 8a0d37c4-d5d4-4b34-a3f8-8ed244e5d4fd
	I0906 12:21:23.796785   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.796788   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.796791   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.796793   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.796925   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:23.797224   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:23.797232   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.797238   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.797242   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.799226   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:23.799235   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.799242   13103 round_trippers.go:580]     Audit-Id: 242f9ee1-d98d-4280-808b-a656a2b92498
	I0906 12:21:23.799247   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.799251   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.799255   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.799259   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.799269   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.799414   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.290082   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:24.290096   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.290102   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.290105   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.291823   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:24.291834   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.291840   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.291843   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.291845   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.291848   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.291851   13103 round_trippers.go:580]     Audit-Id: 6a99ab05-94e2-492b-8af0-b2da0016e5b7
	I0906 12:21:24.291854   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.291926   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:24.292215   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:24.292222   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.292228   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.292232   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.294868   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:24.294879   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.294887   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.294891   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.294895   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.294919   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.294928   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.294942   13103 round_trippers.go:580]     Audit-Id: af0ff70b-fd71-4aeb-b3de-4315f34facb9
	I0906 12:21:24.295045   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.792021   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:24.792045   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.792056   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.792061   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.795099   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:24.795111   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.795118   13103 round_trippers.go:580]     Audit-Id: 49cd14f6-b700-4079-acc6-1c23ea6665a8
	I0906 12:21:24.795121   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.795126   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.795129   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.795134   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.795138   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.795441   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:24.795829   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:24.795839   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.795847   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.795852   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.797049   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:24.797056   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.797062   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.797067   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.797071   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.797077   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.797081   13103 round_trippers.go:580]     Audit-Id: 372e796a-fd9b-4c7f-a4f3-348e4bb85f78
	I0906 12:21:24.797084   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.797314   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.797488   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:25.291260   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:25.291308   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.291321   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.291329   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.293814   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:25.293827   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.293837   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.293845   13103 round_trippers.go:580]     Audit-Id: 0913032e-8be9-417d-bb6c-c5369ea32b94
	I0906 12:21:25.293850   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.293855   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.293858   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.293861   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.294069   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:25.294442   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.294452   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.294460   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.294472   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.295729   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.295737   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.295742   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.295745   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.295749   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.295752   13103 round_trippers.go:580]     Audit-Id: f6de5899-967e-4335-937d-b862caacaac4
	I0906 12:21:25.295755   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.295757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.295934   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.790082   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:25.790110   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.790121   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.790127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.794252   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:25.794265   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.794270   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.794274   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.794276   13103 round_trippers.go:580]     Audit-Id: 84c94fcb-2090-4820-8371-d077f05523ae
	I0906 12:21:25.794279   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.794282   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.794285   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.794608   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7039 chars]
	I0906 12:21:25.794917   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.794925   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.794930   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.794933   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.797962   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:25.797972   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.797977   13103 round_trippers.go:580]     Audit-Id: 0bad9809-253e-41f5-b043-fd2cc4b28671
	I0906 12:21:25.797981   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.797983   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.797986   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.797988   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.797991   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.798051   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.798227   13103 pod_ready.go:93] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.798236   13103 pod_ready.go:82] duration metric: took 14.008395399s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.798242   13103 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.798273   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-459000
	I0906 12:21:25.798278   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.798283   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.798287   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.799854   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.799863   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.799870   13103 round_trippers.go:580]     Audit-Id: 403b7e40-de6c-49d8-bd8c-3037daef8684
	I0906 12:21:25.799876   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.799887   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.799892   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.799896   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.799899   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.800134   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"896","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6663 chars]
	I0906 12:21:25.800368   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.800374   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.800379   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.800382   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.801593   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.801602   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.801608   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.801612   13103 round_trippers.go:580]     Audit-Id: cfc71669-8af7-4367-87d9-6662789b2dae
	I0906 12:21:25.801614   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.801617   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.801621   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.801624   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.801765   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.801934   13103 pod_ready.go:93] pod "etcd-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.801942   13103 pod_ready.go:82] duration metric: took 3.694957ms for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.801952   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.801981   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-459000
	I0906 12:21:25.801986   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.801991   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.801996   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.802919   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.802927   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.802934   13103 round_trippers.go:580]     Audit-Id: 63d71678-c53e-4543-b6a5-d040eec32368
	I0906 12:21:25.802942   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.802946   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.802951   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.802955   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.802960   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.803115   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-459000","namespace":"kube-system","uid":"a7ee0531-75a6-405c-928c-1185a0e5ebd0","resourceVersion":"893","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.33:8443","kubernetes.io/config.hash":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.mirror":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.seen":"2024-09-06T19:16:52.157527221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0906 12:21:25.803342   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.803349   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.803355   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.803358   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.804246   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.804252   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.804256   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.804259   13103 round_trippers.go:580]     Audit-Id: 0ac897aa-9ea8-4691-969b-24565f1cec79
	I0906 12:21:25.804262   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.804264   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.804267   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.804270   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.804446   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.804600   13103 pod_ready.go:93] pod "kube-apiserver-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.804607   13103 pod_ready.go:82] duration metric: took 2.650187ms for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.804617   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.804642   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-459000
	I0906 12:21:25.804646   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.804652   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.804656   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.805698   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.805710   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.805719   13103 round_trippers.go:580]     Audit-Id: d2e6c5a5-f0ad-4b3f-bee2-a22972423cd2
	I0906 12:21:25.805725   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.805729   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.805733   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.805738   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.805741   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.805885   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-459000","namespace":"kube-system","uid":"ef9a4034-636f-4d52-b328-40aff0e03ccb","resourceVersion":"882","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.mirror":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.seen":"2024-09-06T19:16:52.157528036Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0906 12:21:25.806107   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.806114   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.806120   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.806124   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.807056   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.807066   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.807072   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.807075   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.807089   13103 round_trippers.go:580]     Audit-Id: 035907c2-91f4-4135-ba77-18e01d4e93aa
	I0906 12:21:25.807095   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.807098   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.807100   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.807202   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.807359   13103 pod_ready.go:93] pod "kube-controller-manager-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.807367   13103 pod_ready.go:82] duration metric: took 2.745265ms for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.807373   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.807399   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:21:25.807404   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.807410   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.807414   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.808305   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.808312   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.808316   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.808320   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.808323   13103 round_trippers.go:580]     Audit-Id: e1b9f127-6568-4885-a928-a313180b5cfc
	I0906 12:21:25.808326   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.808330   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.808333   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.808489   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crzpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"253c78d8-0d56-49e8-a00c-99218c50beac","resourceVersion":"505","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:21:25.808732   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:21:25.808739   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.808746   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.808749   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.809591   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.809599   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.809603   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.809608   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.809611   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.809613   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.809616   13103 round_trippers.go:580]     Audit-Id: 12243566-34b1-46b1-8e77-91a9e8c62dc1
	I0906 12:21:25.809625   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.809714   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m02","uid":"42483c05-2f0a-48b5-a783-4c5958284f86","resourceVersion":"573","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_17_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3818 chars]
	I0906 12:21:25.809852   13103 pod_ready.go:93] pod "kube-proxy-crzpl" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.809859   13103 pod_ready.go:82] duration metric: took 2.481658ms for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.809864   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.990651   13103 request.go:632] Waited for 180.750264ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:21:25.990733   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:21:25.990745   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.990758   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.990766   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.993181   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:25.993195   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.993203   13103 round_trippers.go:580]     Audit-Id: c00f3d33-4b56-4ae6-a5e8-81c5026b67c8
	I0906 12:21:25.993206   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.993211   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.993214   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.993219   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.993223   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:25.993303   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t24bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"626397be-3b5a-4dd4-8932-283e8edb0d27","resourceVersion":"878","creationTimestamp":"2024-09-06T19:16:56Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0906 12:21:26.191274   13103 request.go:632] Waited for 197.606599ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.191412   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.191429   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.191441   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.191450   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.194207   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.194222   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.194229   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.194233   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.194237   13103 round_trippers.go:580]     Audit-Id: 9fb32ea2-3741-4f49-bb50-a2d213c3ba43
	I0906 12:21:26.194241   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.194245   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.194250   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.194413   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:26.194661   13103 pod_ready.go:93] pod "kube-proxy-t24bs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.194674   13103 pod_ready.go:82] duration metric: took 384.805332ms for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.194683   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.390345   13103 request.go:632] Waited for 195.620855ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:21:26.390407   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:21:26.390414   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.390423   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.390449   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.392604   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.392621   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.392646   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.392655   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.392658   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.392660   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.392665   13103 round_trippers.go:580]     Audit-Id: 270f284c-7338-4efa-b17a-1a10c014da62
	I0906 12:21:26.392667   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.392768   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vqcpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6","resourceVersion":"735","creationTimestamp":"2024-09-06T19:18:30Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:21:26.591656   13103 request.go:632] Waited for 198.580484ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:21:26.591735   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:21:26.591747   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.591759   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.591766   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.594204   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.594217   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.594223   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.594227   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.594230   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.594232   13103 round_trippers.go:580]     Audit-Id: 2f715b53-17f6-46aa-a414-cdfa14512543
	I0906 12:21:26.594235   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.594238   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.594320   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m03","uid":"6c54d256-cf96-4ec0-9d0b-36c85c77ef2b","resourceVersion":"760","creationTimestamp":"2024-09-06T19:19:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_19_25_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:19:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3635 chars]
	I0906 12:21:26.594506   13103 pod_ready.go:93] pod "kube-proxy-vqcpj" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.594515   13103 pod_ready.go:82] duration metric: took 399.828258ms for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.594522   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.792106   13103 request.go:632] Waited for 197.521385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:21:26.792145   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:21:26.792151   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.792159   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.792164   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.794274   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.794287   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.794292   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.794295   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.794297   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.794300   13103 round_trippers.go:580]     Audit-Id: a0b7ecef-315a-4ee1-b32f-542e84989097
	I0906 12:21:26.794310   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.794325   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.794421   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-459000","namespace":"kube-system","uid":"4602221a-c2e8-4f7d-a31e-2910196cb32b","resourceVersion":"887","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.mirror":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.seen":"2024-09-06T19:16:46.929338017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0906 12:21:26.990594   13103 request.go:632] Waited for 195.896372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.990633   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.990639   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.990649   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.990656   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.992802   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.992815   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.992820   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.992824   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.992827   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:26.992829   13103 round_trippers.go:580]     Audit-Id: 4800a535-951a-44f6-b035-5009b5db7c8d
	I0906 12:21:26.992832   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.992836   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.993044   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:26.993239   13103 pod_ready.go:93] pod "kube-scheduler-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.993248   13103 pod_ready.go:82] duration metric: took 398.723382ms for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.993255   13103 pod_ready.go:39] duration metric: took 15.207711162s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:21:26.993267   13103 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:21:26.993321   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:21:27.005121   13103 command_runner.go:130] > 1692
	I0906 12:21:27.005342   13103 api_server.go:72] duration metric: took 31.982233194s to wait for apiserver process to appear ...
	I0906 12:21:27.005350   13103 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:21:27.005359   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:21:27.008362   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:21:27.008393   13103 round_trippers.go:463] GET https://192.169.0.33:8443/version
	I0906 12:21:27.008397   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.008403   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.008406   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.008898   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:27.008905   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.008910   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.008915   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.008919   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.008922   13103 round_trippers.go:580]     Content-Length: 263
	I0906 12:21:27.008927   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.008941   13103 round_trippers.go:580]     Audit-Id: afc79679-e8c5-4a0a-b383-34d3dd5cf866
	I0906 12:21:27.008945   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.008953   13103 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 12:21:27.008973   13103 api_server.go:141] control plane version: v1.31.0
	I0906 12:21:27.008981   13103 api_server.go:131] duration metric: took 3.627345ms to wait for apiserver health ...
	I0906 12:21:27.008986   13103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:21:27.192136   13103 request.go:632] Waited for 183.091553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.192271   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.192278   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.192286   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.192292   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.195706   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:27.195721   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.195729   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.195733   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.195738   13103 round_trippers.go:580]     Audit-Id: af203598-9027-4608-960f-5efe9b85e522
	I0906 12:21:27.195741   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.195757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.195761   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.196938   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89323 chars]
	I0906 12:21:27.198936   13103 system_pods.go:59] 12 kube-system pods found
	I0906 12:21:27.198946   13103 system_pods.go:61] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running
	I0906 12:21:27.198950   13103 system_pods.go:61] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running
	I0906 12:21:27.198953   13103 system_pods.go:61] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:21:27.198957   13103 system_pods.go:61] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:21:27.198959   13103 system_pods.go:61] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:21:27.198962   13103 system_pods.go:61] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running
	I0906 12:21:27.198968   13103 system_pods.go:61] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running
	I0906 12:21:27.198970   13103 system_pods.go:61] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:21:27.198973   13103 system_pods.go:61] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:21:27.198975   13103 system_pods.go:61] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:21:27.198978   13103 system_pods.go:61] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running
	I0906 12:21:27.198982   13103 system_pods.go:61] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:21:27.198989   13103 system_pods.go:74] duration metric: took 189.999782ms to wait for pod list to return data ...
	I0906 12:21:27.198995   13103 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:21:27.390207   13103 request.go:632] Waited for 191.164821ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:21:27.390245   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:21:27.390252   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.390260   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.390264   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.392029   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:27.392044   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.392049   13103 round_trippers.go:580]     Audit-Id: 2fbbe1f8-a5e2-419a-8fe6-1b6b60c2c579
	I0906 12:21:27.392053   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.392056   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.392058   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.392061   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.392063   13103 round_trippers.go:580]     Content-Length: 261
	I0906 12:21:27.392066   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.392086   13103 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2b97d238-fe0f-46a4-b550-296f608e88e4","resourceVersion":"351","creationTimestamp":"2024-09-06T19:16:57Z"}}]}
	I0906 12:21:27.392202   13103 default_sa.go:45] found service account: "default"
	I0906 12:21:27.392211   13103 default_sa.go:55] duration metric: took 193.2122ms for default service account to be created ...
	I0906 12:21:27.392219   13103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:21:27.592153   13103 request.go:632] Waited for 199.860611ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.592227   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.592245   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.592256   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.592265   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.595123   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:27.595136   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.595143   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.595153   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.595157   13103 round_trippers.go:580]     Audit-Id: bffb0aa4-39bf-41e6-9363-65a6d47aff42
	I0906 12:21:27.595160   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.595164   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.595168   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.596227   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89323 chars]
	I0906 12:21:27.598193   13103 system_pods.go:86] 12 kube-system pods found
	I0906 12:21:27.598204   13103 system_pods.go:89] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running
	I0906 12:21:27.598208   13103 system_pods.go:89] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running
	I0906 12:21:27.598211   13103 system_pods.go:89] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:21:27.598214   13103 system_pods.go:89] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:21:27.598216   13103 system_pods.go:89] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:21:27.598220   13103 system_pods.go:89] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running
	I0906 12:21:27.598224   13103 system_pods.go:89] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running
	I0906 12:21:27.598227   13103 system_pods.go:89] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:21:27.598229   13103 system_pods.go:89] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:21:27.598236   13103 system_pods.go:89] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:21:27.598239   13103 system_pods.go:89] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running
	I0906 12:21:27.598243   13103 system_pods.go:89] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:21:27.598250   13103 system_pods.go:126] duration metric: took 206.027101ms to wait for k8s-apps to be running ...
	I0906 12:21:27.598262   13103 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:21:27.598315   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:21:27.609404   13103 system_svc.go:56] duration metric: took 11.137288ms WaitForService to wait for kubelet
	I0906 12:21:27.609422   13103 kubeadm.go:582] duration metric: took 32.586314845s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:21:27.609435   13103 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:21:27.791184   13103 request.go:632] Waited for 181.707048ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes
	I0906 12:21:27.791256   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes
	I0906 12:21:27.791270   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.791280   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.791284   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.798698   13103 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:21:27.798713   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.798721   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.798725   13103 round_trippers.go:580]     Audit-Id: 367702bd-19ff-4848-9862-dc41de16b578
	I0906 12:21:27.798729   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.798735   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.798739   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.798744   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.798923   13103 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14655 chars]
	I0906 12:21:27.799352   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799364   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799371   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799374   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799377   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799381   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799384   13103 node_conditions.go:105] duration metric: took 189.944138ms to run NodePressure ...
	I0906 12:21:27.799392   13103 start.go:241] waiting for startup goroutines ...
	I0906 12:21:27.799399   13103 start.go:246] waiting for cluster config update ...
	I0906 12:21:27.799404   13103 start.go:255] writing updated cluster config ...
	I0906 12:21:27.821252   13103 out.go:201] 
	I0906 12:21:27.843093   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:27.843181   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:27.864582   13103 out.go:177] * Starting "multinode-459000-m02" worker node in "multinode-459000" cluster
	I0906 12:21:27.906824   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:21:27.906848   13103 cache.go:56] Caching tarball of preloaded images
	I0906 12:21:27.906988   13103 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:21:27.907000   13103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:21:27.907095   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:27.907830   13103 start.go:360] acquireMachinesLock for multinode-459000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:27.907909   13103 start.go:364] duration metric: took 62.547µs to acquireMachinesLock for "multinode-459000-m02"
	I0906 12:21:27.907926   13103 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:27.907932   13103 fix.go:54] fixHost starting: m02
	I0906 12:21:27.908283   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:21:27.908299   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:21:27.917825   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57515
	I0906 12:21:27.918176   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:21:27.918549   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:21:27.918566   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:21:27.918784   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:21:27.918904   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:21:27.918992   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetState
	I0906 12:21:27.919074   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:27.919163   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 12773
	I0906 12:21:27.920087   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid 12773 missing from process table
	I0906 12:21:27.920111   13103 fix.go:112] recreateIfNeeded on multinode-459000-m02: state=Stopped err=<nil>
	I0906 12:21:27.920123   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	W0906 12:21:27.920203   13103 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:21:27.942601   13103 out.go:177] * Restarting existing hyperkit VM for "multinode-459000-m02" ...
	I0906 12:21:27.984774   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .Start
	I0906 12:21:27.984975   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:27.985004   13103 main.go:141] libmachine: (multinode-459000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid
	I0906 12:21:27.986238   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid 12773 missing from process table
	I0906 12:21:27.986246   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | pid 12773 is in state "Stopped"
	I0906 12:21:27.986260   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid...
	I0906 12:21:27.986559   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Using UUID 656fac0c-2257-4452-9309-51b4437053c1
	I0906 12:21:28.010616   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Generated MAC fe:64:cc:9a:2e:14
	I0906 12:21:28.010637   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000
	I0906 12:21:28.010773   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"656fac0c-2257-4452-9309-51b4437053c1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0906 12:21:28.010802   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"656fac0c-2257-4452-9309-51b4437053c1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:21:28.010862   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "656fac0c-2257-4452-9309-51b4437053c1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/multinode-459000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"}
	I0906 12:21:28.010908   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 656fac0c-2257-4452-9309-51b4437053c1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/multinode-459000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"
	I0906 12:21:28.010922   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:21:28.012308   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Pid is 13138
	I0906 12:21:28.012836   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Attempt 0
	I0906 12:21:28.012847   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:28.012954   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 13138
	I0906 12:21:28.014959   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Searching for fe:64:cc:9a:2e:14 in /var/db/dhcpd_leases ...
	I0906 12:21:28.015045   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0906 12:21:28.015075   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca76e}
	I0906 12:21:28.015090   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:21:28.015098   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca6c9}
	I0906 12:21:28.015104   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Found match: fe:64:cc:9a:2e:14
	I0906 12:21:28.015122   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | IP: 192.169.0.34
	I0906 12:21:28.015211   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetConfigRaw
	I0906 12:21:28.015984   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:21:28.016200   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:28.016694   13103 machine.go:93] provisionDockerMachine start ...
	I0906 12:21:28.016705   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:21:28.016832   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:21:28.016942   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:21:28.017045   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:21:28.017163   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:21:28.017252   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:21:28.017405   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:21:28.017574   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:21:28.017581   13103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:21:28.020425   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:21:28.028631   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:21:28.029659   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:21:28.029679   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:21:28.029689   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:21:28.029703   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:21:28.418268   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:21:28.418289   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:21:28.532958   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:21:28.532980   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:21:28.532991   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:21:28.533007   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:21:28.533853   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:21:28.533862   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:21:34.182409   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:21:34.182422   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:21:34.182441   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:21:34.205614   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:22:03.080676   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:22:03.080691   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.080823   13103 buildroot.go:166] provisioning hostname "multinode-459000-m02"
	I0906 12:22:03.080835   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.080941   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.081027   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.081123   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.081198   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.081290   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.081435   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.081584   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.081600   13103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-459000-m02 && echo "multinode-459000-m02" | sudo tee /etc/hostname
	I0906 12:22:03.147432   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-459000-m02
	
	I0906 12:22:03.147447   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.147580   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.147686   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.147777   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.147882   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.148030   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.148181   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.148193   13103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-459000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-459000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-459000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:22:03.210956   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:22:03.210971   13103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:22:03.210983   13103 buildroot.go:174] setting up certificates
	I0906 12:22:03.210989   13103 provision.go:84] configureAuth start
	I0906 12:22:03.210996   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.211127   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:22:03.211230   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.211317   13103 provision.go:143] copyHostCerts
	I0906 12:22:03.211342   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:22:03.211388   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:22:03.211393   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:22:03.211527   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:22:03.211723   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:22:03.211752   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:22:03.211757   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:22:03.211879   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:22:03.212065   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:22:03.212095   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:22:03.212100   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:22:03.212185   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:22:03.212343   13103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.multinode-459000-m02 san=[127.0.0.1 192.169.0.34 localhost minikube multinode-459000-m02]
	I0906 12:22:03.292544   13103 provision.go:177] copyRemoteCerts
	I0906 12:22:03.292595   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:22:03.292609   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.292765   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.292872   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.292982   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.293071   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:03.328230   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:22:03.328298   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:22:03.348053   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:22:03.348131   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0906 12:22:03.367639   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:22:03.367712   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:22:03.387332   13103 provision.go:87] duration metric: took 176.33502ms to configureAuth
	I0906 12:22:03.387347   13103 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:22:03.387513   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:22:03.387530   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:03.387682   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.387763   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.387851   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.387925   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.388009   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.388123   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.388249   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.388257   13103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:22:03.443432   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:22:03.443443   13103 buildroot.go:70] root file system type: tmpfs
	I0906 12:22:03.443517   13103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:22:03.443528   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.443676   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.443804   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.443902   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.443992   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.444119   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.444251   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.444297   13103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.33"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:22:03.511777   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.33
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:22:03.511796   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.511939   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.512046   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.512150   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.512229   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.512369   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.512523   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.512537   13103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:22:05.101095   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:22:05.101110   13103 machine.go:96] duration metric: took 37.084578612s to provisionDockerMachine
	I0906 12:22:05.101117   13103 start.go:293] postStartSetup for "multinode-459000-m02" (driver="hyperkit")
	I0906 12:22:05.101128   13103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:22:05.101143   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.101326   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:22:05.101340   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.101444   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.101546   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.101646   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.101727   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.136158   13103 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:22:05.139064   13103 command_runner.go:130] > NAME=Buildroot
	I0906 12:22:05.139075   13103 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 12:22:05.139080   13103 command_runner.go:130] > ID=buildroot
	I0906 12:22:05.139085   13103 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 12:22:05.139091   13103 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 12:22:05.139245   13103 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:22:05.139254   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:22:05.139354   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:22:05.139523   13103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:22:05.139532   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:22:05.139729   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:22:05.147744   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:22:05.167085   13103 start.go:296] duration metric: took 65.96042ms for postStartSetup
	I0906 12:22:05.167104   13103 fix.go:56] duration metric: took 37.259343707s for fixHost
	I0906 12:22:05.167120   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.167254   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.167358   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.167446   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.167521   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.167651   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:05.167820   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:05.167828   13103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:22:05.223842   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650525.359228063
	
	I0906 12:22:05.223853   13103 fix.go:216] guest clock: 1725650525.359228063
	I0906 12:22:05.223859   13103 fix.go:229] Guest: 2024-09-06 12:22:05.359228063 -0700 PDT Remote: 2024-09-06 12:22:05.16711 -0700 PDT m=+120.857961279 (delta=192.118063ms)
	I0906 12:22:05.223869   13103 fix.go:200] guest clock delta is within tolerance: 192.118063ms
	I0906 12:22:05.223874   13103 start.go:83] releasing machines lock for "multinode-459000-m02", held for 37.316129214s
	I0906 12:22:05.223892   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.224018   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:22:05.247126   13103 out.go:177] * Found network options:
	I0906 12:22:05.267149   13103 out.go:177]   - NO_PROXY=192.169.0.33
	W0906 12:22:05.288480   13103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:22:05.288517   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289464   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289709   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289822   13103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:22:05.289870   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	W0906 12:22:05.289953   13103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:22:05.290045   13103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:22:05.290049   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.290072   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.290260   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.290309   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.290487   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.290522   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.290612   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.290641   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.290732   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.322318   13103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 12:22:05.322403   13103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:22:05.322457   13103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:22:05.371219   13103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 12:22:05.371281   13103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0906 12:22:05.371302   13103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:22:05.371309   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:22:05.371372   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:22:05.386255   13103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0906 12:22:05.386586   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:22:05.395028   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:22:05.403351   13103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:22:05.403403   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:22:05.411931   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:22:05.420232   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:22:05.428446   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:22:05.436920   13103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:22:05.445773   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:22:05.453982   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:22:05.462364   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:22:05.470872   13103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:22:05.478456   13103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 12:22:05.478577   13103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:22:05.486053   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:22:05.577721   13103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:22:05.597370   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:22:05.597442   13103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:22:05.616652   13103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0906 12:22:05.617149   13103 command_runner.go:130] > [Unit]
	I0906 12:22:05.617160   13103 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 12:22:05.617165   13103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 12:22:05.617170   13103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0906 12:22:05.617176   13103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0906 12:22:05.617186   13103 command_runner.go:130] > StartLimitBurst=3
	I0906 12:22:05.617191   13103 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 12:22:05.617195   13103 command_runner.go:130] > [Service]
	I0906 12:22:05.617202   13103 command_runner.go:130] > Type=notify
	I0906 12:22:05.617206   13103 command_runner.go:130] > Restart=on-failure
	I0906 12:22:05.617209   13103 command_runner.go:130] > Environment=NO_PROXY=192.169.0.33
	I0906 12:22:05.617215   13103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 12:22:05.617224   13103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 12:22:05.617230   13103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 12:22:05.617236   13103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 12:22:05.617242   13103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 12:22:05.617248   13103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 12:22:05.617254   13103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 12:22:05.617263   13103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 12:22:05.617271   13103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 12:22:05.617274   13103 command_runner.go:130] > ExecStart=
	I0906 12:22:05.617286   13103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0906 12:22:05.617291   13103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 12:22:05.617298   13103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 12:22:05.617304   13103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 12:22:05.617308   13103 command_runner.go:130] > LimitNOFILE=infinity
	I0906 12:22:05.617312   13103 command_runner.go:130] > LimitNPROC=infinity
	I0906 12:22:05.617315   13103 command_runner.go:130] > LimitCORE=infinity
	I0906 12:22:05.617321   13103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 12:22:05.617325   13103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 12:22:05.617329   13103 command_runner.go:130] > TasksMax=infinity
	I0906 12:22:05.617332   13103 command_runner.go:130] > TimeoutStartSec=0
	I0906 12:22:05.617338   13103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 12:22:05.617341   13103 command_runner.go:130] > Delegate=yes
	I0906 12:22:05.617346   13103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 12:22:05.617354   13103 command_runner.go:130] > KillMode=process
	I0906 12:22:05.617358   13103 command_runner.go:130] > [Install]
	I0906 12:22:05.617361   13103 command_runner.go:130] > WantedBy=multi-user.target
	I0906 12:22:05.617421   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:22:05.628871   13103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:22:05.647873   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:22:05.659524   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:22:05.669927   13103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:22:05.694232   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:22:05.704881   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:22:05.719722   13103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 12:22:05.719995   13103 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:22:05.722778   13103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0906 12:22:05.722977   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:22:05.730138   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:22:05.743763   13103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:22:05.836175   13103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:22:05.941964   13103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:22:05.941990   13103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:22:05.956052   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:22:06.050692   13103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:23:07.093245   13103 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0906 12:23:07.093261   13103 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0906 12:23:07.093271   13103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.026654808s)
	I0906 12:23:07.093333   13103 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:23:07.102433   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0906 12:23:07.102446   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.391304610Z" level=info msg="Starting up"
	I0906 12:23:07.102458   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392004946Z" level=info msg="containerd not running, starting managed containerd"
	I0906 12:23:07.102471   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392654963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	I0906 12:23:07.102483   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.410081610Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	I0906 12:23:07.102493   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424704285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0906 12:23:07.102506   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424727648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0906 12:23:07.102517   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424763525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0906 12:23:07.102526   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424774162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102536   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424814976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102546   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424848725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102564   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424989631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102577   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425025159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102587   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425037295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102597   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425045404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102606   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425070702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102615   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425145665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102630   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426659099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102641   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426697531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102662   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426805598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102671   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426843741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0906 12:23:07.102680   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426872817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0906 12:23:07.102689   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426890938Z" level=info msg="metadata content store policy set" policy=shared
	I0906 12:23:07.102699   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428817057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0906 12:23:07.102713   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428864164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0906 12:23:07.102723   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428927784Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0906 12:23:07.102733   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428940464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0906 12:23:07.102743   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428949588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0906 12:23:07.102753   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.429051358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0906 12:23:07.102762   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434538379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0906 12:23:07.102771   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434628871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0906 12:23:07.102780   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434666891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0906 12:23:07.102790   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434697689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0906 12:23:07.102799   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434728108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102811   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434757897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102821   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434791514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102831   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434822320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102842   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434853529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102859   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434883549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102892   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434912597Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102903   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434940545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102913   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434974771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102921   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435007785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102930   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435036996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102938   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435106915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102947   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435139241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102956   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435168766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102964   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435199068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102973   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435228429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102982   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435261229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102991   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435300063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103001   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435332353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103009   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435361642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103018   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435390212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103027   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435421195Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0906 12:23:07.103036   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435456060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103044   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435486969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103053   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435518328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0906 12:23:07.103063   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435600410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0906 12:23:07.103074   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435642893Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0906 12:23:07.103088   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435672635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0906 12:23:07.103181   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435702100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0906 12:23:07.103192   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435729967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103203   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435813148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0906 12:23:07.103210   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435857835Z" level=info msg="NRI interface is disabled by configuration."
	I0906 12:23:07.103218   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436104040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0906 12:23:07.103226   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436210486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0906 12:23:07.103234   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436350222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0906 12:23:07.103242   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436412176Z" level=info msg="containerd successfully booted in 0.027112s"
	I0906 12:23:07.103250   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.419560925Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0906 12:23:07.103257   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.432687700Z" level=info msg="Loading containers: start."
	I0906 12:23:07.103277   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.537897424Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0906 12:23:07.103288   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.166682137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0906 12:23:07.103301   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.209864072Z" level=warning msg="error locating sandbox id 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d: sandbox 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d not found"
	I0906 12:23:07.103309   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.210077786Z" level=info msg="Loading containers: done."
	I0906 12:23:07.103319   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.216995153Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	I0906 12:23:07.103325   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.217101276Z" level=info msg="Daemon has completed initialization"
	I0906 12:23:07.103332   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235153584Z" level=info msg="API listen on /var/run/docker.sock"
	I0906 12:23:07.103338   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235304358Z" level=info msg="API listen on [::]:2376"
	I0906 12:23:07.103345   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 systemd[1]: Started Docker Application Container Engine.
	I0906 12:23:07.103352   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.198320582Z" level=info msg="Processing signal 'terminated'"
	I0906 12:23:07.103361   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199273282Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0906 12:23:07.103370   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199793722Z" level=info msg="Daemon shutdown complete"
	I0906 12:23:07.103379   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0906 12:23:07.103415   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199992866Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0906 12:23:07.103423   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.200011550Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0906 12:23:07.103428   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0906 12:23:07.103433   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0906 12:23:07.103439   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0906 12:23:07.103445   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 dockerd[842]: time="2024-09-06T19:22:07.237222595Z" level=info msg="Starting up"
	I0906 12:23:07.103453   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 dockerd[842]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0906 12:23:07.103461   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0906 12:23:07.103467   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0906 12:23:07.103473   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0906 12:23:07.127876   13103 out.go:201] 
	W0906 12:23:07.148646   13103 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:22:03 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.391304610Z" level=info msg="Starting up"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392004946Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392654963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.410081610Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424704285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424727648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424763525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424774162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424814976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424848725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424989631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425025159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425037295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425045404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425070702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425145665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426659099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426697531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426805598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426843741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426872817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426890938Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428817057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428864164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428927784Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428940464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428949588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.429051358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434538379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434628871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434666891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434697689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434728108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434757897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434791514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434822320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434853529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434883549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434912597Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434940545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434974771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435007785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435036996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435106915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435139241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435168766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435199068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435228429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435261229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435300063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435332353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435361642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435390212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435421195Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435456060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435486969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435518328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435600410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435642893Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435672635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435702100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435729967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435813148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435857835Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436104040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436210486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436350222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436412176Z" level=info msg="containerd successfully booted in 0.027112s"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.419560925Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.432687700Z" level=info msg="Loading containers: start."
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.537897424Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.166682137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.209864072Z" level=warning msg="error locating sandbox id 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d: sandbox 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d not found"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.210077786Z" level=info msg="Loading containers: done."
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.216995153Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.217101276Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235153584Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235304358Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:22:05 multinode-459000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.198320582Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199273282Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199793722Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:22:06 multinode-459000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199992866Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.200011550Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:07 multinode-459000-m02 dockerd[842]: time="2024-09-06T19:22:07.237222595Z" level=info msg="Starting up"
	Sep 06 19:23:07 multinode-459000-m02 dockerd[842]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:22:03 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.391304610Z" level=info msg="Starting up"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392004946Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392654963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.410081610Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424704285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424727648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424763525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424774162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424814976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424848725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424989631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425025159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425037295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425045404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425070702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425145665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426659099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426697531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426805598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426843741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426872817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426890938Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428817057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428864164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428927784Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428940464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428949588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.429051358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434538379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434628871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434666891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434697689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434728108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434757897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434791514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434822320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434853529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434883549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434912597Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434940545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434974771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435007785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435036996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435106915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435139241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435168766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435199068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435228429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435261229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435300063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435332353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435361642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435390212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435421195Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435456060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435486969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435518328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435600410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435642893Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435672635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435702100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435729967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435813148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435857835Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436104040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436210486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436350222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436412176Z" level=info msg="containerd successfully booted in 0.027112s"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.419560925Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.432687700Z" level=info msg="Loading containers: start."
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.537897424Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.166682137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.209864072Z" level=warning msg="error locating sandbox id 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d: sandbox 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d not found"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.210077786Z" level=info msg="Loading containers: done."
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.216995153Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.217101276Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235153584Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235304358Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:22:05 multinode-459000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.198320582Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199273282Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199793722Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:22:06 multinode-459000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199992866Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.200011550Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:07 multinode-459000-m02 dockerd[842]: time="2024-09-06T19:22:07.237222595Z" level=info msg="Starting up"
	Sep 06 19:23:07 multinode-459000-m02 dockerd[842]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:23:07.148721   13103 out.go:270] * 
	W0906 12:23:07.149793   13103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:23:07.211707   13103 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-459000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-459000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-459000 -n multinode-459000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-459000 logs -n 25: (2.734068443s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                            |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile578296277/001/cp-test_multinode-459000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000:/home/docker/cp-test_multinode-459000-m02_multinode-459000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000 sudo cat                                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /home/docker/cp-test_multinode-459000-m02_multinode-459000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03:/home/docker/cp-test_multinode-459000-m02_multinode-459000-m03.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000-m03 sudo cat                                                                      | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /home/docker/cp-test_multinode-459000-m02_multinode-459000-m03.txt                                                         |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp testdata/cp-test.txt                                                                                   | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03:/home/docker/cp-test.txt                                                                              |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile578296277/001/cp-test_multinode-459000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000:/home/docker/cp-test_multinode-459000-m03_multinode-459000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000 sudo cat                                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | /home/docker/cp-test_multinode-459000-m03_multinode-459000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000-m02:/home/docker/cp-test_multinode-459000-m03_multinode-459000-m02.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000-m02 sudo cat                                                                      | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | /home/docker/cp-test_multinode-459000-m03_multinode-459000-m02.txt                                                         |                  |         |         |                     |                     |
	| node    | multinode-459000 node stop m03                                                                                             | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	| node    | multinode-459000 node start                                                                                                | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                 |                  |         |         |                     |                     |
	| node    | list -p multinode-459000                                                                                                   | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT |                     |
	| stop    | -p multinode-459000                                                                                                        | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:20 PDT |
	| start   | -p multinode-459000                                                                                                        | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:20 PDT |                     |
	|         | --wait=true -v=8                                                                                                           |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                          |                  |         |         |                     |                     |
	| node    | list -p multinode-459000                                                                                                   | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:23 PDT |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:20:04
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:20:04.345863   13103 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:04.346053   13103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:04.346060   13103 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:04.346064   13103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:04.346235   13103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:20:04.347624   13103 out.go:352] Setting JSON to false
	I0906 12:20:04.372597   13103 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11975,"bootTime":1725638429,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:20:04.372699   13103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:20:04.394472   13103 out.go:177] * [multinode-459000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:20:04.436211   13103 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:20:04.436276   13103 notify.go:220] Checking for updates...
	I0906 12:20:04.478971   13103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:04.499819   13103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:20:04.521129   13103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:20:04.542343   13103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:20:04.563008   13103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:20:04.584955   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:04.585128   13103 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:20:04.585775   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.585861   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:20:04.595482   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57485
	I0906 12:20:04.595845   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:20:04.596336   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:20:04.596353   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:20:04.596616   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:20:04.596748   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.625251   13103 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:20:04.667227   13103 start.go:297] selected driver: hyperkit
	I0906 12:20:04.667254   13103 start.go:901] validating driver "hyperkit" against &{Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:04.667526   13103 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:20:04.667707   13103 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:20:04.667925   13103 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:20:04.677596   13103 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:20:04.681720   13103 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.681741   13103 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:20:04.684904   13103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:20:04.684944   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:04.684957   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:04.685037   13103 start.go:340] cluster config:
	{Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:04.685143   13103 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:20:04.727037   13103 out.go:177] * Starting "multinode-459000" primary control-plane node in "multinode-459000" cluster
	I0906 12:20:04.748083   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:20:04.748146   13103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:20:04.748175   13103 cache.go:56] Caching tarball of preloaded images
	I0906 12:20:04.748360   13103 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:20:04.748383   13103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:20:04.748522   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:20:04.749240   13103 start.go:360] acquireMachinesLock for multinode-459000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:20:04.749328   13103 start.go:364] duration metric: took 55.823µs to acquireMachinesLock for "multinode-459000"
	I0906 12:20:04.749345   13103 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:20:04.749357   13103 fix.go:54] fixHost starting: 
	I0906 12:20:04.749579   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.749598   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:20:04.758425   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57487
	I0906 12:20:04.758777   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:20:04.759147   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:20:04.759162   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:20:04.759382   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:20:04.759508   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.759613   13103 main.go:141] libmachine: (multinode-459000) Calling .GetState
	I0906 12:20:04.759719   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.759791   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 12754
	I0906 12:20:04.760733   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid 12754 missing from process table
	I0906 12:20:04.760763   13103 fix.go:112] recreateIfNeeded on multinode-459000: state=Stopped err=<nil>
	I0906 12:20:04.760785   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	W0906 12:20:04.760890   13103 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:20:04.802907   13103 out.go:177] * Restarting existing hyperkit VM for "multinode-459000" ...
	I0906 12:20:04.824117   13103 main.go:141] libmachine: (multinode-459000) Calling .Start
	I0906 12:20:04.824350   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.824408   13103 main.go:141] libmachine: (multinode-459000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid
	I0906 12:20:04.826541   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid 12754 missing from process table
	I0906 12:20:04.826557   13103 main.go:141] libmachine: (multinode-459000) DBG | pid 12754 is in state "Stopped"
	I0906 12:20:04.826571   13103 main.go:141] libmachine: (multinode-459000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid...
	I0906 12:20:04.827002   13103 main.go:141] libmachine: (multinode-459000) DBG | Using UUID 01eb6722-41be-4f7c-b53d-2237e8e3c176
	I0906 12:20:04.935555   13103 main.go:141] libmachine: (multinode-459000) DBG | Generated MAC 3a:dc:bb:38:e3:28
	I0906 12:20:04.935584   13103 main.go:141] libmachine: (multinode-459000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000
	I0906 12:20:04.935690   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"01eb6722-41be-4f7c-b53d-2237e8e3c176", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:20:04.935723   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"01eb6722-41be-4f7c-b53d-2237e8e3c176", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:20:04.935758   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "01eb6722-41be-4f7c-b53d-2237e8e3c176", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/multinode-459000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"}
	I0906 12:20:04.935794   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 01eb6722-41be-4f7c-b53d-2237e8e3c176 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/multinode-459000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"
	I0906 12:20:04.935811   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:20:04.937295   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Pid is 13116
	I0906 12:20:04.937708   13103 main.go:141] libmachine: (multinode-459000) DBG | Attempt 0
	I0906 12:20:04.937719   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.937806   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 13116
	I0906 12:20:04.939357   13103 main.go:141] libmachine: (multinode-459000) DBG | Searching for 3a:dc:bb:38:e3:28 in /var/db/dhcpd_leases ...
	I0906 12:20:04.939446   13103 main.go:141] libmachine: (multinode-459000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0906 12:20:04.939476   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:20:04.939495   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca6c9}
	I0906 12:20:04.939523   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca68b}
	I0906 12:20:04.939530   13103 main.go:141] libmachine: (multinode-459000) DBG | Found match: 3a:dc:bb:38:e3:28
	I0906 12:20:04.939550   13103 main.go:141] libmachine: (multinode-459000) DBG | IP: 192.169.0.33
	I0906 12:20:04.939615   13103 main.go:141] libmachine: (multinode-459000) Calling .GetConfigRaw
	I0906 12:20:04.940318   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:04.940491   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:20:04.940980   13103 machine.go:93] provisionDockerMachine start ...
	I0906 12:20:04.940993   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.941161   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:04.941289   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:04.941397   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:04.941519   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:04.941644   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:04.941784   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:04.941989   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:04.941997   13103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:20:04.945527   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:20:04.997276   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:20:04.997987   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:20:04.998001   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:20:04.998009   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:20:04.998017   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:20:05.390023   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:20:05.390038   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:20:05.504740   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:20:05.504761   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:20:05.504773   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:20:05.504793   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:20:05.505682   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:20:05.505706   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:20:11.126600   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:20:11.126629   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:20:11.126642   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:20:11.150792   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:20:40.017036   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:20:40.017050   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.017188   13103 buildroot.go:166] provisioning hostname "multinode-459000"
	I0906 12:20:40.017198   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.017332   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.017423   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.017512   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.017602   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.017716   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.017845   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.017999   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.018007   13103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-459000 && echo "multinode-459000" | sudo tee /etc/hostname
	I0906 12:20:40.096089   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-459000
	
	I0906 12:20:40.096107   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.096242   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.096342   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.096426   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.096502   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.096618   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.096770   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.096781   13103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-459000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-459000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-459000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:20:40.169206   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:20:40.169225   13103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:20:40.169241   13103 buildroot.go:174] setting up certificates
	I0906 12:20:40.169250   13103 provision.go:84] configureAuth start
	I0906 12:20:40.169257   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.169406   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:40.169492   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.169576   13103 provision.go:143] copyHostCerts
	I0906 12:20:40.169605   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:20:40.169676   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:20:40.169683   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:20:40.170064   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:20:40.170273   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:20:40.170315   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:20:40.170320   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:20:40.170402   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:20:40.170550   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:20:40.170592   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:20:40.170597   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:20:40.170676   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:20:40.170820   13103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.multinode-459000 san=[127.0.0.1 192.169.0.33 localhost minikube multinode-459000]
	I0906 12:20:40.232666   13103 provision.go:177] copyRemoteCerts
	I0906 12:20:40.232717   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:20:40.232731   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.232854   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.232974   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.233068   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.233156   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:40.274812   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:20:40.274888   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:20:40.293995   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:20:40.294068   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:20:40.313187   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:20:40.313258   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 12:20:40.332207   13103 provision.go:87] duration metric: took 162.943562ms to configureAuth
	I0906 12:20:40.332219   13103 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:20:40.332387   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:40.332402   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:40.332534   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.332628   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.332709   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.332780   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.332850   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.332965   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.333093   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.333100   13103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:20:40.400358   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:20:40.400369   13103 buildroot.go:70] root file system type: tmpfs
	I0906 12:20:40.400464   13103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:20:40.400477   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.400616   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.400716   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.400806   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.400897   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.401035   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.401181   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.401224   13103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:20:40.478937   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:20:40.478956   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.479091   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.479178   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.479269   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.479347   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.479476   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.479629   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.479640   13103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:20:42.127114   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:20:42.127129   13103 machine.go:96] duration metric: took 37.186310804s to provisionDockerMachine
	I0906 12:20:42.127143   13103 start.go:293] postStartSetup for "multinode-459000" (driver="hyperkit")
	I0906 12:20:42.127150   13103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:20:42.127165   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.127347   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:20:42.127361   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.127444   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.127542   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.127636   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.127724   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.166901   13103 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:20:42.169868   13103 command_runner.go:130] > NAME=Buildroot
	I0906 12:20:42.169887   13103 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 12:20:42.169893   13103 command_runner.go:130] > ID=buildroot
	I0906 12:20:42.169899   13103 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 12:20:42.169908   13103 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 12:20:42.170001   13103 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:20:42.170014   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:20:42.170122   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:20:42.170312   13103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:20:42.170318   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:20:42.170518   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:20:42.178002   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:20:42.197689   13103 start.go:296] duration metric: took 70.537804ms for postStartSetup
	I0906 12:20:42.197709   13103 fix.go:56] duration metric: took 37.448529222s for fixHost
	I0906 12:20:42.197720   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.197863   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.197977   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.198074   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.198146   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.198279   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:42.198417   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:42.198424   13103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:20:42.262511   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650442.394217988
	
	I0906 12:20:42.262522   13103 fix.go:216] guest clock: 1725650442.394217988
	I0906 12:20:42.262528   13103 fix.go:229] Guest: 2024-09-06 12:20:42.394217988 -0700 PDT Remote: 2024-09-06 12:20:42.197712 -0700 PDT m=+37.888180409 (delta=196.505988ms)
	I0906 12:20:42.262551   13103 fix.go:200] guest clock delta is within tolerance: 196.505988ms
	I0906 12:20:42.262555   13103 start.go:83] releasing machines lock for "multinode-459000", held for 37.513393533s
	I0906 12:20:42.262575   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.262704   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:42.262819   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263209   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263322   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263421   13103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:20:42.263463   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.263466   13103 ssh_runner.go:195] Run: cat /version.json
	I0906 12:20:42.263476   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.263583   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.263606   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.263691   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.263709   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.263807   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.263822   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.263897   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.263913   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.349749   13103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 12:20:42.349790   13103 command_runner.go:130] > {"iso_version": "v1.34.0", "kicbase_version": "v0.0.44-1724862063-19530", "minikube_version": "v1.34.0", "commit": "613a681f9f90c87e637792fcb55bc4d32fe5c29c"}
	I0906 12:20:42.349946   13103 ssh_runner.go:195] Run: systemctl --version
	I0906 12:20:42.354330   13103 command_runner.go:130] > systemd 252 (252)
	I0906 12:20:42.354353   13103 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0906 12:20:42.354539   13103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:20:42.358516   13103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 12:20:42.358541   13103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:20:42.358584   13103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:20:42.371660   13103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0906 12:20:42.371693   13103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:20:42.371706   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:20:42.371808   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:20:42.386518   13103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0906 12:20:42.386805   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:20:42.395515   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:20:42.404507   13103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:20:42.404553   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:20:42.413199   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:20:42.422017   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:20:42.430768   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:20:42.439534   13103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:20:42.448644   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:20:42.457341   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:20:42.465857   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:20:42.474621   13103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:20:42.482317   13103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 12:20:42.482490   13103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:20:42.490267   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:42.589095   13103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:20:42.608272   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:20:42.608350   13103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:20:42.622568   13103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0906 12:20:42.622694   13103 command_runner.go:130] > [Unit]
	I0906 12:20:42.622704   13103 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 12:20:42.622712   13103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 12:20:42.622718   13103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0906 12:20:42.622723   13103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0906 12:20:42.622727   13103 command_runner.go:130] > StartLimitBurst=3
	I0906 12:20:42.622731   13103 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 12:20:42.622734   13103 command_runner.go:130] > [Service]
	I0906 12:20:42.622737   13103 command_runner.go:130] > Type=notify
	I0906 12:20:42.622740   13103 command_runner.go:130] > Restart=on-failure
	I0906 12:20:42.622747   13103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 12:20:42.622754   13103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 12:20:42.622761   13103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 12:20:42.622766   13103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 12:20:42.622771   13103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 12:20:42.622777   13103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 12:20:42.622784   13103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 12:20:42.622791   13103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 12:20:42.622797   13103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 12:20:42.622806   13103 command_runner.go:130] > ExecStart=
	I0906 12:20:42.622822   13103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0906 12:20:42.622829   13103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 12:20:42.622836   13103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 12:20:42.622842   13103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 12:20:42.622845   13103 command_runner.go:130] > LimitNOFILE=infinity
	I0906 12:20:42.622850   13103 command_runner.go:130] > LimitNPROC=infinity
	I0906 12:20:42.622853   13103 command_runner.go:130] > LimitCORE=infinity
	I0906 12:20:42.622858   13103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 12:20:42.622862   13103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 12:20:42.622866   13103 command_runner.go:130] > TasksMax=infinity
	I0906 12:20:42.622882   13103 command_runner.go:130] > TimeoutStartSec=0
	I0906 12:20:42.622891   13103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 12:20:42.622895   13103 command_runner.go:130] > Delegate=yes
	I0906 12:20:42.622900   13103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 12:20:42.622904   13103 command_runner.go:130] > KillMode=process
	I0906 12:20:42.622908   13103 command_runner.go:130] > [Install]
	I0906 12:20:42.622922   13103 command_runner.go:130] > WantedBy=multi-user.target
	I0906 12:20:42.623046   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:20:42.635107   13103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:20:42.650265   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:20:42.660442   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:20:42.670557   13103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:20:42.687733   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:20:42.698135   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:20:42.712589   13103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 12:20:42.712891   13103 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:20:42.715807   13103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0906 12:20:42.715876   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:20:42.723104   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:20:42.736529   13103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:20:42.845157   13103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:20:42.954660   13103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:20:42.954733   13103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:20:42.970878   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:43.069021   13103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:20:45.394719   13103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325687442s)
	I0906 12:20:45.394781   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:20:45.405825   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:20:45.415611   13103 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:20:45.518550   13103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:20:45.620332   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:45.730400   13103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:20:45.744586   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:20:45.756085   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:45.867521   13103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:20:45.926066   13103 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:20:45.926144   13103 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:20:45.930542   13103 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 12:20:45.930554   13103 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 12:20:45.930559   13103 command_runner.go:130] > Device: 0,22	Inode: 771         Links: 1
	I0906 12:20:45.930564   13103 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0906 12:20:45.930568   13103 command_runner.go:130] > Access: 2024-09-06 19:20:46.012191218 +0000
	I0906 12:20:45.930573   13103 command_runner.go:130] > Modify: 2024-09-06 19:20:46.012191218 +0000
	I0906 12:20:45.930577   13103 command_runner.go:130] > Change: 2024-09-06 19:20:46.014191220 +0000
	I0906 12:20:45.930581   13103 command_runner.go:130] >  Birth: -
	I0906 12:20:45.930604   13103 start.go:563] Will wait 60s for crictl version
	I0906 12:20:45.930645   13103 ssh_runner.go:195] Run: which crictl
	I0906 12:20:45.933399   13103 command_runner.go:130] > /usr/bin/crictl
	I0906 12:20:45.933622   13103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:20:45.962193   13103 command_runner.go:130] > Version:  0.1.0
	I0906 12:20:45.962207   13103 command_runner.go:130] > RuntimeName:  docker
	I0906 12:20:45.962210   13103 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0906 12:20:45.962214   13103 command_runner.go:130] > RuntimeApiVersion:  v1
	I0906 12:20:45.963280   13103 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:20:45.963347   13103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:20:45.981353   13103 command_runner.go:130] > 27.2.0
	I0906 12:20:45.982262   13103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:20:45.999044   13103 command_runner.go:130] > 27.2.0
	I0906 12:20:46.023107   13103 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:20:46.023157   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:46.023538   13103 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:20:46.028008   13103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:20:46.038612   13103 kubeadm.go:883] updating cluster {Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:defaul
t APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:20:46.038697   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:20:46.038752   13103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:20:46.051833   13103 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0906 12:20:46.051846   13103 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:20:46.051850   13103 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:20:46.051855   13103 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:20:46.051858   13103 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:20:46.051862   13103 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0906 12:20:46.051865   13103 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0906 12:20:46.051871   13103 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:20:46.051877   13103 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:20:46.051882   13103 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 12:20:46.051948   13103 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:20:46.051957   13103 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:20:46.052037   13103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:20:46.064745   13103 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0906 12:20:46.064758   13103 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:20:46.064762   13103 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:20:46.064766   13103 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:20:46.064769   13103 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:20:46.064773   13103 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0906 12:20:46.064776   13103 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0906 12:20:46.064787   13103 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:20:46.064792   13103 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:20:46.064796   13103 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 12:20:46.065514   13103 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:20:46.065534   13103 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:20:46.065544   13103 kubeadm.go:934] updating node { 192.169.0.33 8443 v1.31.0 docker true true} ...
	I0906 12:20:46.065620   13103 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-459000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:20:46.065684   13103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:20:46.101901   13103 command_runner.go:130] > cgroupfs
	I0906 12:20:46.102506   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:46.102517   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:46.102527   13103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:20:46.102543   13103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.33 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-459000 NodeName:multinode-459000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:20:46.102625   13103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-459000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:20:46.102686   13103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:20:46.111110   13103 command_runner.go:130] > kubeadm
	I0906 12:20:46.111117   13103 command_runner.go:130] > kubectl
	I0906 12:20:46.111120   13103 command_runner.go:130] > kubelet
	I0906 12:20:46.111230   13103 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:20:46.111277   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:20:46.119320   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0906 12:20:46.132438   13103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:20:46.146346   13103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0906 12:20:46.160046   13103 ssh_runner.go:195] Run: grep 192.169.0.33	control-plane.minikube.internal$ /etc/hosts
	I0906 12:20:46.162862   13103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:20:46.172928   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:46.273763   13103 ssh_runner.go:195] Run: sudo systemctl start kubelet
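The `/etc/hosts` update a few lines above uses a filter-then-append idiom: strip any existing `control-plane.minikube.internal` entry, append the current IP, and copy the result back over the file. The same pattern against a scratch file (no sudo; the file path and the stale 192.169.0.99 entry are made up for the demo):

```shell
# Same filter-then-append idiom as the /etc/hosts rewrite above, on a
# throwaway file with a fabricated stale entry.
hosts=/tmp/hosts.demo.$$
printf '127.0.0.1 localhost\n192.169.0.99\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop the stale entry, append the fresh one, replace via a temp file.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '192.169.0.33\tcontrol-plane.minikube.internal\n'
} > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'control-plane' "$hosts"
```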
	I0906 12:20:46.288239   13103 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000 for IP: 192.169.0.33
	I0906 12:20:46.288251   13103 certs.go:194] generating shared ca certs ...
	I0906 12:20:46.288261   13103 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:46.288443   13103 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:20:46.288516   13103 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:20:46.288526   13103 certs.go:256] generating profile certs ...
	I0906 12:20:46.288635   13103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.key
	I0906 12:20:46.288722   13103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key.154086e5
	I0906 12:20:46.288789   13103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key
	I0906 12:20:46.288802   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:20:46.288824   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:20:46.288840   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:20:46.288861   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:20:46.288878   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:20:46.288913   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:20:46.288942   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:20:46.288960   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:20:46.289058   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:20:46.289106   13103 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:20:46.289115   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:20:46.289188   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:20:46.289239   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:20:46.289279   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:20:46.289387   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:20:46.289437   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.289463   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.289483   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.289983   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:20:46.323599   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:20:46.349693   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:20:46.380553   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:20:46.405494   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 12:20:46.425404   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 12:20:46.445154   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:20:46.464970   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 12:20:46.484693   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:20:46.504348   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:20:46.523910   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:20:46.543476   13103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:20:46.556852   13103 ssh_runner.go:195] Run: openssl version
	I0906 12:20:46.560972   13103 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0906 12:20:46.561024   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:20:46.569323   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572714   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572823   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572861   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.576889   13103 command_runner.go:130] > 3ec20f2e
	I0906 12:20:46.577051   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:20:46.585363   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:20:46.593723   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.596951   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.597034   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.597071   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.601216   13103 command_runner.go:130] > b5213941
	I0906 12:20:46.601259   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:20:46.609583   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:20:46.618022   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621405   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621429   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621461   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.625579   13103 command_runner.go:130] > 51391683
	I0906 12:20:46.625701   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
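The `openssl x509 -hash` / `ln -fs .../<hash>.0` sequence above builds the standard OpenSSL CA lookup layout: trust stores resolve a CA by its subject-name hash plus a `.N` suffix. A self-contained sketch with a throwaway certificate (the real run links under /etc/ssl/certs; the paths and CN here are placeholders):

```shell
# Generate a throwaway cert to show the <subject-hash>.0 symlink
# convention used above.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/hashdemo.key -out /tmp/hashdemo.pem -days 2 \
  -subj '/CN=hash-demo' 2>/dev/null
h=$(openssl x509 -hash -noout -in /tmp/hashdemo.pem)
ln -fs /tmp/hashdemo.pem "/tmp/$h.0"   # OpenSSL probes <hash>.0, <hash>.1, ...
echo "linked /tmp/$h.0"
```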
	I0906 12:20:46.634117   13103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:20:46.637570   13103 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:20:46.637580   13103 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0906 12:20:46.637586   13103 command_runner.go:130] > Device: 253,1	Inode: 3148599     Links: 1
	I0906 12:20:46.637591   13103 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 12:20:46.637598   13103 command_runner.go:130] > Access: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637604   13103 command_runner.go:130] > Modify: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637608   13103 command_runner.go:130] > Change: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637612   13103 command_runner.go:130] >  Birth: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637725   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:20:46.642003   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.642072   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:20:46.646243   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.646295   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:20:46.650659   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.650716   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:20:46.654983   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.655072   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:20:46.659282   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.659324   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:20:46.663431   13103 command_runner.go:130] > Certificate will not expire
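Each `-checkend 86400` invocation above asks OpenSSL whether the certificate remains valid for at least another 86400 seconds (24 hours); exit status 0 with the message `Certificate will not expire` means it does, which is what minikube treats as "no rotation needed". With a throwaway two-day certificate (names are illustrative):

```shell
# Demonstrate the 24h expiry check run against each control-plane cert above.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/checkend.key -out /tmp/checkend.crt -days 2 \
  -subj '/CN=checkend-demo' 2>/dev/null
# Exit 0 => the cert survives the next 86400 seconds.
if openssl x509 -noout -in /tmp/checkend.crt -checkend 86400; then
  echo "ok: valid for at least 24h"
fi
```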
	I0906 12:20:46.663587   13103 kubeadm.go:392] StartCluster: {Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:46.663700   13103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:20:46.680120   13103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:20:46.687982   13103 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0906 12:20:46.687996   13103 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0906 12:20:46.688003   13103 command_runner.go:130] > /var/lib/minikube/etcd:
	I0906 12:20:46.688008   13103 command_runner.go:130] > member
	I0906 12:20:46.688054   13103 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:20:46.688064   13103 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:20:46.688107   13103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:20:46.695454   13103 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:20:46.695768   13103 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-459000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:46.695853   13103 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-459000" cluster setting kubeconfig missing "multinode-459000" context setting]
	I0906 12:20:46.696079   13103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:46.696780   13103 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:46.696975   13103 kapi.go:59] client config for multinode-459000: &rest.Config{Host:"https://192.169.0.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa883ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:20:46.697305   13103 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:20:46.697478   13103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:20:46.704887   13103 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.33
	I0906 12:20:46.704905   13103 kubeadm.go:1160] stopping kube-system containers ...
	I0906 12:20:46.704959   13103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:20:46.723369   13103 command_runner.go:130] > 12b00d3e81cd
	I0906 12:20:46.723381   13103 command_runner.go:130] > b8675b45ba97
	I0906 12:20:46.723384   13103 command_runner.go:130] > 0516c7173c76
	I0906 12:20:46.723387   13103 command_runner.go:130] > 6766a97ec06f
	I0906 12:20:46.723391   13103 command_runner.go:130] > b2cede164434
	I0906 12:20:46.723394   13103 command_runner.go:130] > e4605e60128b
	I0906 12:20:46.723411   13103 command_runner.go:130] > 98079ff18be9
	I0906 12:20:46.723418   13103 command_runner.go:130] > 68811f115b6f
	I0906 12:20:46.723422   13103 command_runner.go:130] > 7158af8be341
	I0906 12:20:46.723426   13103 command_runner.go:130] > fde17951087f
	I0906 12:20:46.723432   13103 command_runner.go:130] > 487be703273e
	I0906 12:20:46.723435   13103 command_runner.go:130] > 95c1a9b114b1
	I0906 12:20:46.723445   13103 command_runner.go:130] > 03508ab110f1
	I0906 12:20:46.723449   13103 command_runner.go:130] > 8b8fefcb9e0b
	I0906 12:20:46.723452   13103 command_runner.go:130] > 6f313c531f3e
	I0906 12:20:46.723455   13103 command_runner.go:130] > 8455632502ed
	I0906 12:20:46.724125   13103 docker.go:483] Stopping containers: [12b00d3e81cd b8675b45ba97 0516c7173c76 6766a97ec06f b2cede164434 e4605e60128b 98079ff18be9 68811f115b6f 7158af8be341 fde17951087f 487be703273e 95c1a9b114b1 03508ab110f1 8b8fefcb9e0b 6f313c531f3e 8455632502ed]
	I0906 12:20:46.724190   13103 ssh_runner.go:195] Run: docker stop 12b00d3e81cd b8675b45ba97 0516c7173c76 6766a97ec06f b2cede164434 e4605e60128b 98079ff18be9 68811f115b6f 7158af8be341 fde17951087f 487be703273e 95c1a9b114b1 03508ab110f1 8b8fefcb9e0b 6f313c531f3e 8455632502ed
	I0906 12:20:46.738443   13103 command_runner.go:130] > 12b00d3e81cd
	I0906 12:20:46.738474   13103 command_runner.go:130] > b8675b45ba97
	I0906 12:20:46.738657   13103 command_runner.go:130] > 0516c7173c76
	I0906 12:20:46.738757   13103 command_runner.go:130] > 6766a97ec06f
	I0906 12:20:46.738837   13103 command_runner.go:130] > b2cede164434
	I0906 12:20:46.738974   13103 command_runner.go:130] > e4605e60128b
	I0906 12:20:46.739000   13103 command_runner.go:130] > 98079ff18be9
	I0906 12:20:46.739061   13103 command_runner.go:130] > 68811f115b6f
	I0906 12:20:46.739156   13103 command_runner.go:130] > 7158af8be341
	I0906 12:20:46.739263   13103 command_runner.go:130] > fde17951087f
	I0906 12:20:46.739379   13103 command_runner.go:130] > 487be703273e
	I0906 12:20:46.739467   13103 command_runner.go:130] > 95c1a9b114b1
	I0906 12:20:46.739588   13103 command_runner.go:130] > 03508ab110f1
	I0906 12:20:46.739640   13103 command_runner.go:130] > 8b8fefcb9e0b
	I0906 12:20:46.739757   13103 command_runner.go:130] > 6f313c531f3e
	I0906 12:20:46.739869   13103 command_runner.go:130] > 8455632502ed
	I0906 12:20:46.740823   13103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 12:20:46.753311   13103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:20:46.762059   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0906 12:20:46.762071   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0906 12:20:46.762077   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0906 12:20:46.762083   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:20:46.762204   13103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:20:46.762210   13103 kubeadm.go:157] found existing configuration files:
	
	I0906 12:20:46.762252   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 12:20:46.769254   13103 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:20:46.769280   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:20:46.769328   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:20:46.776572   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 12:20:46.783758   13103 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:20:46.783776   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:20:46.783811   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:20:46.791113   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 12:20:46.798161   13103 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:20:46.798183   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:20:46.798220   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:20:46.805713   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 12:20:46.812921   13103 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:20:46.812949   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:20:46.812990   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
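The loop above applies one rule per kubeconfig: keep an /etc/kubernetes/*.conf file only if it already pins `https://control-plane.minikube.internal:8443`, otherwise delete it so the subsequent `kubeadm init phase kubeconfig` regenerates it. In miniature, with scratch paths and a fabricated stale endpoint:

```shell
# Miniature of the keep-or-delete rule applied above, on a scratch file.
f=/tmp/admin.conf.demo
printf 'server: https://somewhere-else:6443\n' > "$f"
if ! grep -q 'https://control-plane.minikube.internal:8443' "$f"; then
  rm -f "$f"   # stale (or missing) endpoint: let kubeadm rewrite the file
fi
[ -e "$f" ] || echo "stale config removed"
```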
	I0906 12:20:46.820390   13103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:20:46.827763   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:46.898290   13103 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:20:46.898453   13103 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 12:20:46.898625   13103 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 12:20:46.898765   13103 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 12:20:46.898960   13103 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0906 12:20:46.899098   13103 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0906 12:20:46.899397   13103 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0906 12:20:46.899561   13103 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0906 12:20:46.899681   13103 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0906 12:20:46.899817   13103 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 12:20:46.899989   13103 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 12:20:46.900143   13103 command_runner.go:130] > [certs] Using the existing "sa" key
	I0906 12:20:46.900985   13103 command_runner.go:130] ! W0906 19:20:47.031470    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:46.901004   13103 command_runner.go:130] ! W0906 19:20:47.032174    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:46.901041   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:46.935711   13103 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:20:47.096680   13103 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:20:47.204439   13103 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 12:20:47.365845   13103 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:20:47.451527   13103 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:20:47.525150   13103 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:20:47.527254   13103 command_runner.go:130] ! W0906 19:20:47.069183    1330 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.527272   13103 command_runner.go:130] ! W0906 19:20:47.069676    1330 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.527286   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.576279   13103 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:20:47.581148   13103 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:20:47.581159   13103 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 12:20:47.689821   13103 command_runner.go:130] ! W0906 19:20:47.697610    1335 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.689851   13103 command_runner.go:130] ! W0906 19:20:47.698106    1335 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.689868   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.746190   13103 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:20:47.746600   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:20:47.748596   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:20:47.749246   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:20:47.750702   13103 command_runner.go:130] ! W0906 19:20:47.870242    1362 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.750732   13103 command_runner.go:130] ! W0906 19:20:47.871098    1362 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.750753   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.814153   13103 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:20:47.826523   13103 command_runner.go:130] ! W0906 19:20:47.947508    1370 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.826546   13103 command_runner.go:130] ! W0906 19:20:47.947979    1370 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.826615   13103 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:20:47.826675   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.327215   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.827064   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.840074   13103 command_runner.go:130] > 1692
	I0906 12:20:48.840096   13103 api_server.go:72] duration metric: took 1.013496031s to wait for apiserver process to appear ...
	I0906 12:20:48.840102   13103 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:20:48.840118   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.026473   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:20:51.026490   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:20:51.026497   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.054937   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:20:51.054956   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:20:51.341860   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.346791   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 12:20:51.346809   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 12:20:51.841712   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.847377   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 12:20:51.847398   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 12:20:52.341716   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:52.345528   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:20:52.345592   13103 round_trippers.go:463] GET https://192.169.0.33:8443/version
	I0906 12:20:52.345598   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:52.345606   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:52.345609   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:52.352319   13103 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 12:20:52.352332   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:52.352337   13103 round_trippers.go:580]     Audit-Id: 5ffc807c-a78c-402c-87d3-b9b415b40e5f
	I0906 12:20:52.352340   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:52.352350   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:52.352354   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:52.352356   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:52.352359   13103 round_trippers.go:580]     Content-Length: 263
	I0906 12:20:52.352363   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:52 GMT
	I0906 12:20:52.352382   13103 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 12:20:52.352432   13103 api_server.go:141] control plane version: v1.31.0
	I0906 12:20:52.352443   13103 api_server.go:131] duration metric: took 3.512352698s to wait for apiserver health ...
	I0906 12:20:52.352449   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:52.352452   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:52.374855   13103 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 12:20:52.395566   13103 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 12:20:52.402927   13103 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 12:20:52.402941   13103 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0906 12:20:52.402950   13103 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0906 12:20:52.402955   13103 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 12:20:52.402959   13103 command_runner.go:130] > Access: 2024-09-06 19:20:14.852309625 +0000
	I0906 12:20:52.402966   13103 command_runner.go:130] > Modify: 2024-09-03 22:42:55.000000000 +0000
	I0906 12:20:52.402971   13103 command_runner.go:130] > Change: 2024-09-06 19:20:13.268309735 +0000
	I0906 12:20:52.402978   13103 command_runner.go:130] >  Birth: -
	I0906 12:20:52.405546   13103 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0906 12:20:52.405555   13103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0906 12:20:52.439971   13103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 12:20:52.805772   13103 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 12:20:52.854248   13103 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 12:20:52.933352   13103 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 12:20:53.005604   13103 command_runner.go:130] > daemonset.apps/kindnet configured
	I0906 12:20:53.007357   13103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:20:53.007404   13103 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:20:53.007414   13103 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:20:53.007474   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:20:53.007480   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.007486   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.007490   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.009554   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.009563   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.009569   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.009572   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.009575   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.009579   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.009591   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.009594   13103 round_trippers.go:580]     Audit-Id: 55484294-9cbd-46c9-bee1-1b642c12b69d
	I0906 12:20:53.010487   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"849"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0906 12:20:53.013723   13103 system_pods.go:59] 12 kube-system pods found
	I0906 12:20:53.013738   13103 system_pods.go:61] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:20:53.013744   13103 system_pods.go:61] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 12:20:53.013748   13103 system_pods.go:61] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:20:53.013756   13103 system_pods.go:61] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:20:53.013760   13103 system_pods.go:61] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:20:53.013767   13103 system_pods.go:61] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:20:53.013771   13103 system_pods.go:61] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:20:53.013776   13103 system_pods.go:61] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:20:53.013780   13103 system_pods.go:61] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:20:53.013783   13103 system_pods.go:61] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:20:53.013786   13103 system_pods.go:61] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 12:20:53.013790   13103 system_pods.go:61] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running
	I0906 12:20:53.013794   13103 system_pods.go:74] duration metric: took 6.429185ms to wait for pod list to return data ...
	I0906 12:20:53.013800   13103 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:20:53.013833   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes
	I0906 12:20:53.013837   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.013843   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.013846   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.015478   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.015502   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.015511   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.015514   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.015517   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.015520   13103 round_trippers.go:580]     Audit-Id: 30570eec-545b-4745-8743-a1cab2a3fb29
	I0906 12:20:53.015523   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.015525   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.015644   13103 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"849"},"items":[{"metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14782 chars]
	I0906 12:20:53.016196   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016209   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016218   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016221   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016225   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016229   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016233   13103 node_conditions.go:105] duration metric: took 2.429093ms to run NodePressure ...
	I0906 12:20:53.016243   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:53.160252   13103 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 12:20:53.282226   13103 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 12:20:53.283414   13103 command_runner.go:130] ! W0906 19:20:53.201637    2133 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:53.283436   13103 command_runner.go:130] ! W0906 19:20:53.202191    2133 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:53.283454   13103 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 12:20:53.283521   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0906 12:20:53.283525   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.283530   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.283534   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.285658   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.285667   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.285674   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.285678   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.285683   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.285688   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.285692   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.285695   13103 round_trippers.go:580]     Audit-Id: dfd8d4ba-250d-43fd-a3c9-7094cfa9b329
	I0906 12:20:53.286150   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"820","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 31218 chars]
	I0906 12:20:53.286880   13103 kubeadm.go:739] kubelet initialised
	I0906 12:20:53.286889   13103 kubeadm.go:740] duration metric: took 3.428745ms waiting for restarted kubelet to initialise ...
	I0906 12:20:53.286897   13103 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:20:53.286928   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:20:53.286933   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.286939   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.286944   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.289064   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.289072   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.289076   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.289080   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.289082   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.289085   13103 round_trippers.go:580]     Audit-Id: f185bee8-cf54-428e-9251-f89670109af4
	I0906 12:20:53.289088   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.289091   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.290451   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0906 12:20:53.293407   13103 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.293459   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:20:53.293464   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.293470   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.293475   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.295326   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.295335   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.295339   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.295342   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.295345   13103 round_trippers.go:580]     Audit-Id: 83fb4e68-22fb-4080-a855-59e8a5c87034
	I0906 12:20:53.295348   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.295350   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.295353   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.295454   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:20:53.295719   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.295727   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.295733   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.295737   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.297662   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.297677   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.297685   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.297688   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.297691   13103 round_trippers.go:580]     Audit-Id: e5df61ad-c106-47e5-bbc3-4070002c5b9e
	I0906 12:20:53.297694   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.297697   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.297699   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.297927   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.298135   13103 pod_ready.go:98] node "multinode-459000" hosting pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.298146   13103 pod_ready.go:82] duration metric: took 4.727596ms for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.298153   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.298161   13103 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.298194   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-459000
	I0906 12:20:53.298199   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.298205   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.298209   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.299621   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.299629   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.299635   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.299638   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.299642   13103 round_trippers.go:580]     Audit-Id: be77759d-114f-4c80-a5d1-184591aa7427
	I0906 12:20:53.299645   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.299648   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.299650   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.299898   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"820","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6887 chars]
	I0906 12:20:53.300165   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.300172   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.300178   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.300181   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.302558   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.302567   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.302573   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.302576   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.302579   13103 round_trippers.go:580]     Audit-Id: 978a43f1-4d45-4094-ad01-bc549f492e2e
	I0906 12:20:53.302582   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.302586   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.302589   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.302801   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.302977   13103 pod_ready.go:98] node "multinode-459000" hosting pod "etcd-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.302989   13103 pod_ready.go:82] duration metric: took 4.821114ms for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.302995   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "etcd-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.303006   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.303035   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-459000
	I0906 12:20:53.303040   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.303045   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.303049   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.304725   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.304734   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.304739   13103 round_trippers.go:580]     Audit-Id: 744b1630-f218-49d7-bf9e-0874b8ae067c
	I0906 12:20:53.304749   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.304757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.304762   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.304765   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.304768   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.305009   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-459000","namespace":"kube-system","uid":"a7ee0531-75a6-405c-928c-1185a0e5ebd0","resourceVersion":"817","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.33:8443","kubernetes.io/config.hash":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.mirror":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.seen":"2024-09-06T19:16:52.157527221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0906 12:20:53.305246   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.305252   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.305260   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.305264   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.306599   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.306606   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.306611   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.306614   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.306617   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.306621   13103 round_trippers.go:580]     Audit-Id: 9726eabf-d52a-40b0-a363-c7385d06aab6
	I0906 12:20:53.306623   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.306625   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.306860   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.307038   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-apiserver-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.307048   13103 pod_ready.go:82] duration metric: took 4.037219ms for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.307054   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-apiserver-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.307059   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.307089   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-459000
	I0906 12:20:53.307094   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.307099   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.307103   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.308747   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.308756   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.308763   13103 round_trippers.go:580]     Audit-Id: 9a50b907-1158-4251-97c3-8744af1d441b
	I0906 12:20:53.308797   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.308802   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.308806   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.308810   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.308812   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.308934   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-459000","namespace":"kube-system","uid":"ef9a4034-636f-4d52-b328-40aff0e03ccb","resourceVersion":"818","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.mirror":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.seen":"2024-09-06T19:16:52.157528036Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0906 12:20:53.409587   13103 request.go:632] Waited for 100.344038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.409636   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.409642   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.409649   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.409678   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.411918   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.411930   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.411938   13103 round_trippers.go:580]     Audit-Id: 36f8d9a0-08c1-4900-a883-c98118ddb954
	I0906 12:20:53.411943   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.411948   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.411951   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.411976   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.411984   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.412084   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.412281   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-controller-manager-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.412293   13103 pod_ready.go:82] duration metric: took 105.228203ms for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.412300   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-controller-manager-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.412305   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.609009   13103 request.go:632] Waited for 196.662551ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:20:53.609093   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:20:53.609102   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.609109   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.609117   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.610900   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.610911   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.610918   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.610924   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.610933   13103 round_trippers.go:580]     Audit-Id: 8f5d6aad-4ab1-48b5-889e-18c35f8c2f26
	I0906 12:20:53.610936   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.610940   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.610944   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.611070   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crzpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"253c78d8-0d56-49e8-a00c-99218c50beac","resourceVersion":"505","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:20:53.809000   13103 request.go:632] Waited for 197.654908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:20:53.809067   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:20:53.809076   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.809084   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.809090   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.810657   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.810685   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.810691   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.810694   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.810697   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.810700   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.810704   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.810706   13103 round_trippers.go:580]     Audit-Id: 5e585264-4859-4285-aeec-7287183c8596
	I0906 12:20:53.810806   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m02","uid":"42483c05-2f0a-48b5-a783-4c5958284f86","resourceVersion":"573","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_17_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3818 chars]
	I0906 12:20:53.810982   13103 pod_ready.go:93] pod "kube-proxy-crzpl" in "kube-system" namespace has status "Ready":"True"
	I0906 12:20:53.810990   13103 pod_ready.go:82] duration metric: took 398.681997ms for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.810997   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.009014   13103 request.go:632] Waited for 197.982629ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:20:54.009087   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:20:54.009094   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.009120   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.009127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.010937   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.010949   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.010956   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.010962   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.010969   13103 round_trippers.go:580]     Audit-Id: 16e1e167-aa04-4560-aac6-3565f9b98f3d
	I0906 12:20:54.010975   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.010978   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.010980   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.011063   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t24bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"626397be-3b5a-4dd4-8932-283e8edb0d27","resourceVersion":"849","creationTimestamp":"2024-09-06T19:16:56Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0906 12:20:54.209036   13103 request.go:632] Waited for 197.706507ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:54.209076   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:54.209082   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.209116   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.209123   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.210677   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.210689   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.210699   13103 round_trippers.go:580]     Audit-Id: 84f2f37b-1511-4669-aa06-cc83e829c4c3
	I0906 12:20:54.210707   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.210716   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.210722   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.210730   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.210734   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.210986   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:54.211173   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-proxy-t24bs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:54.211182   13103 pod_ready.go:82] duration metric: took 400.183556ms for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:54.211191   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-proxy-t24bs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:54.211199   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.409019   13103 request.go:632] Waited for 197.785012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:20:54.409077   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:20:54.409083   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.409089   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.409093   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.410708   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.410718   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.410723   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.410726   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.410729   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.410733   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.410735   13103 round_trippers.go:580]     Audit-Id: 9bdf799c-01ec-497a-9877-acc5ee1c1400
	I0906 12:20:54.410738   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.410823   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vqcpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6","resourceVersion":"735","creationTimestamp":"2024-09-06T19:18:30Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:20:54.607514   13103 request.go:632] Waited for 196.41514ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:20:54.607567   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:20:54.607574   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.607581   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.607587   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.609573   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.609582   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.609587   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.609598   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.609601   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.609604   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.609606   13103 round_trippers.go:580]     Audit-Id: bc2698b5-26ee-4b75-8329-688459bdcba8
	I0906 12:20:54.609613   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.609723   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m03","uid":"6c54d256-cf96-4ec0-9d0b-36c85c77ef2b","resourceVersion":"760","creationTimestamp":"2024-09-06T19:19:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_19_25_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:19:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3635 chars]
	I0906 12:20:54.609895   13103 pod_ready.go:93] pod "kube-proxy-vqcpj" in "kube-system" namespace has status "Ready":"True"
	I0906 12:20:54.609903   13103 pod_ready.go:82] duration metric: took 398.702285ms for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.609909   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.809054   13103 request.go:632] Waited for 199.102039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:20:54.809116   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:20:54.809123   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.809130   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.809135   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.811199   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:54.811208   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.811213   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.811217   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.811220   13103 round_trippers.go:580]     Audit-Id: 14d96cdd-752b-4f32-81b5-946d2a4fb9c9
	I0906 12:20:54.811222   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.811226   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.811232   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.811498   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-459000","namespace":"kube-system","uid":"4602221a-c2e8-4f7d-a31e-2910196cb32b","resourceVersion":"819","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.mirror":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.seen":"2024-09-06T19:16:46.929338017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0906 12:20:55.009522   13103 request.go:632] Waited for 197.762294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.009571   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.009578   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.009584   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.009588   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.011014   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:55.011021   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.011025   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.011031   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.011033   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.011038   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.011041   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.011044   13103 round_trippers.go:580]     Audit-Id: 74dc264e-7739-4a48-972c-506fbb05ade8
	I0906 12:20:55.011131   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:55.011329   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-scheduler-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:55.011339   13103 pod_ready.go:82] duration metric: took 401.42623ms for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:55.011345   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-scheduler-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:55.011353   13103 pod_ready.go:39] duration metric: took 1.724456804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:20:55.011367   13103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:20:55.022277   13103 command_runner.go:130] > -16
	I0906 12:20:55.022455   13103 ops.go:34] apiserver oom_adj: -16
	I0906 12:20:55.022461   13103 kubeadm.go:597] duration metric: took 8.334425046s to restartPrimaryControlPlane
	I0906 12:20:55.022467   13103 kubeadm.go:394] duration metric: took 8.358925932s to StartCluster
	I0906 12:20:55.022482   13103 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:55.022574   13103 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:55.022988   13103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:55.023242   13103 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:20:55.023269   13103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:20:55.023397   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:55.046055   13103 out.go:177] * Verifying Kubernetes components...
	I0906 12:20:55.088345   13103 out.go:177] * Enabled addons: 
	I0906 12:20:55.109104   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:55.130229   13103 addons.go:510] duration metric: took 106.968501ms for enable addons: enabled=[]
	I0906 12:20:55.271679   13103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:20:55.282375   13103 node_ready.go:35] waiting up to 6m0s for node "multinode-459000" to be "Ready" ...
	I0906 12:20:55.282438   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.282444   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.282450   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.282453   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.283922   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:55.283933   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.283939   13103 round_trippers.go:580]     Audit-Id: e487ae5e-005a-48d5-b58f-3d58f014af16
	I0906 12:20:55.283945   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.283948   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.283952   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.283955   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.283957   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.284279   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:55.784190   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.784216   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.784227   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.784232   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.787081   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:55.787097   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.787104   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.787116   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.787120   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.787124   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.787128   13103 round_trippers.go:580]     Audit-Id: 613bdd38-a63c-46c4-ad1d-e23b6b4ead50
	I0906 12:20:55.787132   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.787225   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:56.283042   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:56.283069   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:56.283081   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:56.283086   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:56.285909   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:56.285924   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:56.285931   13103 round_trippers.go:580]     Audit-Id: 336e04a7-5b77-468d-a980-45a2482d9f8c
	I0906 12:20:56.285935   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:56.285938   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:56.285942   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:56.285946   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:56.285949   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:56 GMT
	I0906 12:20:56.286021   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:56.783350   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:56.783375   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:56.783387   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:56.783394   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:56.786321   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:56.786335   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:56.786342   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:56 GMT
	I0906 12:20:56.786346   13103 round_trippers.go:580]     Audit-Id: 4d25f0ac-c3f5-4f16-98b8-45432f07e35c
	I0906 12:20:56.786350   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:56.786354   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:56.786358   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:56.786361   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:56.786856   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:57.282948   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:57.282975   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:57.282986   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:57.282992   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:57.285671   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:57.285684   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:57.285691   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:57 GMT
	I0906 12:20:57.285695   13103 round_trippers.go:580]     Audit-Id: e05176ca-d4e5-4302-8520-49057bbbad74
	I0906 12:20:57.285699   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:57.285703   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:57.285720   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:57.285733   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:57.285862   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:57.286129   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:20:57.784635   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:57.784663   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:57.784701   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:57.784710   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:57.787321   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:57.787336   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:57.787343   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:57.787348   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:57 GMT
	I0906 12:20:57.787353   13103 round_trippers.go:580]     Audit-Id: 22a92049-2ac6-4f14-a36b-43fdd32ce11f
	I0906 12:20:57.787357   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:57.787363   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:57.787366   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:57.787656   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:58.282909   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:58.282936   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:58.282951   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:58.282957   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:58.285758   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:58.285775   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:58.285783   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:58 GMT
	I0906 12:20:58.285789   13103 round_trippers.go:580]     Audit-Id: d296704e-2819-42cc-ba15-d8774b071678
	I0906 12:20:58.285795   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:58.285801   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:58.285806   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:58.285811   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:58.285911   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:58.782638   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:58.782660   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:58.782696   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:58.782704   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:58.784836   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:58.784849   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:58.784856   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:58 GMT
	I0906 12:20:58.784862   13103 round_trippers.go:580]     Audit-Id: 7471c8ca-95e1-4e27-b818-6a3ee6a94f84
	I0906 12:20:58.784867   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:58.784873   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:58.784875   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:58.784878   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:58.784952   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.283284   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:59.283306   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:59.283315   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:59.283324   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:59.285640   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:59.285651   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:59.285657   13103 round_trippers.go:580]     Audit-Id: b32b984e-803b-45c6-a485-2f6621da8200
	I0906 12:20:59.285659   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:59.285663   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:59.285665   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:59.285669   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:59.285672   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:59 GMT
	I0906 12:20:59.285774   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.783737   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:59.783761   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:59.783773   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:59.783780   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:59.786325   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:59.786343   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:59.786351   13103 round_trippers.go:580]     Audit-Id: 9216e10e-3b70-4a91-9a52-a8a339880eb8
	I0906 12:20:59.786357   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:59.786360   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:59.786364   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:59.786367   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:59.786374   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:59 GMT
	I0906 12:20:59.786769   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.787020   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:00.283378   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:00.283465   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:00.283478   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:00.283485   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:00.285634   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:00.285646   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:00.285651   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:00.285654   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:00.285661   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:00.285663   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:00.285683   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:00 GMT
	I0906 12:21:00.285688   13103 round_trippers.go:580]     Audit-Id: b74fdec6-ab72-46ec-970e-11133a30eb49
	I0906 12:21:00.285749   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:00.782855   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:00.782871   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:00.782877   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:00.782880   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:00.785063   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:00.785077   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:00.785083   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:00 GMT
	I0906 12:21:00.785086   13103 round_trippers.go:580]     Audit-Id: b8b54770-654c-47ac-bb70-f47239d9a85f
	I0906 12:21:00.785090   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:00.785094   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:00.785097   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:00.785100   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:00.785269   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.283867   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:01.283894   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:01.283904   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:01.283910   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:01.286375   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:01.286388   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:01.286397   13103 round_trippers.go:580]     Audit-Id: a8e07055-17c9-44ef-a99d-9029a0fff2ce
	I0906 12:21:01.286401   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:01.286433   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:01.286441   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:01.286445   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:01.286450   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:01 GMT
	I0906 12:21:01.286643   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.784066   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:01.784089   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:01.784101   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:01.784110   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:01.786790   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:01.786802   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:01.786808   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:01.786810   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:01 GMT
	I0906 12:21:01.786818   13103 round_trippers.go:580]     Audit-Id: 709cfb3e-a937-4f70-b01f-a375a7ecd6d2
	I0906 12:21:01.786822   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:01.786824   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:01.786827   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:01.787030   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.787224   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:02.283110   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:02.283218   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:02.283234   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:02.283241   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:02.285929   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:02.285942   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:02.285947   13103 round_trippers.go:580]     Audit-Id: 23e67746-8645-42b6-b246-9ea7bad09da7
	I0906 12:21:02.285950   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:02.285952   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:02.285954   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:02.285957   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:02.285980   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:02 GMT
	I0906 12:21:02.286063   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:02.784562   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:02.784589   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:02.784601   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:02.784607   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:02.787179   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:02.787191   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:02.787196   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:02.787199   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:02 GMT
	I0906 12:21:02.787202   13103 round_trippers.go:580]     Audit-Id: 3138c6d4-06dc-4784-ad86-3d2bf39d9d18
	I0906 12:21:02.787204   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:02.787207   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:02.787210   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:02.787360   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:03.282839   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:03.282867   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:03.282879   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:03.282887   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:03.285832   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:03.285850   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:03.285857   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:03.285865   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:03.285869   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:03.285874   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:03 GMT
	I0906 12:21:03.285878   13103 round_trippers.go:580]     Audit-Id: c405913f-9342-44dc-931f-f8414fcdd19e
	I0906 12:21:03.285882   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:03.285942   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:03.782685   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:03.782706   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:03.782716   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:03.782721   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:03.785444   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:03.785456   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:03.785462   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:03.785465   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:03.785468   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:03.785471   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:03 GMT
	I0906 12:21:03.785473   13103 round_trippers.go:580]     Audit-Id: 5a0d7dbe-5224-44ee-a0df-2ba863732ca1
	I0906 12:21:03.785477   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:03.785734   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:04.282619   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:04.282642   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:04.282654   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:04.282662   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:04.285440   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:04.285454   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:04.285462   13103 round_trippers.go:580]     Audit-Id: 7fa3551b-6c18-4c05-a1f9-feedce2df755
	I0906 12:21:04.285465   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:04.285468   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:04.285472   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:04.285476   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:04.285479   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:04 GMT
	I0906 12:21:04.285554   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:04.285813   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:04.783450   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:04.783471   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:04.783483   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:04.783492   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:04.786538   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:04.786553   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:04.786566   13103 round_trippers.go:580]     Audit-Id: e6e70310-56ea-4d9b-9dfb-50f1853d1c43
	I0906 12:21:04.786572   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:04.786578   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:04.786582   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:04.786587   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:04.786592   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:04 GMT
	I0906 12:21:04.786801   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:05.282653   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:05.282671   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:05.282680   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:05.282687   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:05.285490   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:05.285505   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:05.285512   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:05.285517   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:05.285521   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:05.285526   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:05.285530   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:05 GMT
	I0906 12:21:05.285534   13103 round_trippers.go:580]     Audit-Id: 78254ef8-b353-4eda-8274-53fea1e71827
	I0906 12:21:05.285829   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:05.783384   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:05.783407   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:05.783417   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:05.783422   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:05.786324   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:05.786338   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:05.786346   13103 round_trippers.go:580]     Audit-Id: 7cadcf56-0277-4ba8-b4c6-6b99b793cc5a
	I0906 12:21:05.786350   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:05.786353   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:05.786358   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:05.786362   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:05.786367   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:05 GMT
	I0906 12:21:05.786462   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:06.283633   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:06.283652   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:06.283660   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:06.283665   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:06.286164   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:06.286176   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:06.286181   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:06.286184   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:06 GMT
	I0906 12:21:06.286193   13103 round_trippers.go:580]     Audit-Id: 07623995-247a-4533-b371-d74f13933cf9
	I0906 12:21:06.286197   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:06.286200   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:06.286203   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:06.286261   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:06.286456   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:06.784394   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:06.784416   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:06.784425   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:06.784430   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:06.786848   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:06.786862   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:06.786874   13103 round_trippers.go:580]     Audit-Id: ebdeab18-9907-4e9c-b0af-049ddea0dffa
	I0906 12:21:06.786882   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:06.786890   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:06.786898   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:06.786905   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:06.786910   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:06 GMT
	I0906 12:21:06.787176   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:07.283258   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:07.283286   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:07.283298   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:07.283303   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:07.286283   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:07.286298   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:07.286304   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:07 GMT
	I0906 12:21:07.286309   13103 round_trippers.go:580]     Audit-Id: 198ec8db-095b-4749-936f-50fdaebba154
	I0906 12:21:07.286313   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:07.286318   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:07.286322   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:07.286325   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:07.286389   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:07.782722   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:07.782750   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:07.782762   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:07.782772   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:07.787689   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:07.787701   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:07.787706   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:07.787709   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:07 GMT
	I0906 12:21:07.787712   13103 round_trippers.go:580]     Audit-Id: 43ab80c6-cadf-474a-a628-290349ba4713
	I0906 12:21:07.787733   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:07.787739   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:07.787742   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:07.788169   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:08.284167   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:08.284228   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:08.284239   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:08.284244   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:08.286632   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:08.286645   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:08.286651   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:08.286655   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:08.286658   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:08.286661   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:08 GMT
	I0906 12:21:08.286664   13103 round_trippers.go:580]     Audit-Id: 33efcd67-9d4d-4b18-9e85-046bf5c121f5
	I0906 12:21:08.286666   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:08.286715   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:08.286913   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:08.782461   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:08.782499   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:08.782508   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:08.782513   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:08.784535   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:08.784551   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:08.784563   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:08 GMT
	I0906 12:21:08.784568   13103 round_trippers.go:580]     Audit-Id: 52e2b62a-8c0c-4d1c-8f26-db12cc5752c5
	I0906 12:21:08.784571   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:08.784574   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:08.784578   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:08.784581   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:08.784653   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:09.283882   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:09.283906   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:09.283917   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:09.283925   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:09.286310   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:09.286322   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:09.286330   13103 round_trippers.go:580]     Audit-Id: f8e97681-9a81-4185-96fe-451b96e23c20
	I0906 12:21:09.286333   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:09.286337   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:09.286340   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:09.286364   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:09.286375   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:09 GMT
	I0906 12:21:09.286444   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:09.784111   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:09.784128   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:09.784136   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:09.784153   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:09.785974   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:09.785984   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:09.785994   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:09.786000   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:09.786004   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:09.786008   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:09 GMT
	I0906 12:21:09.786012   13103 round_trippers.go:580]     Audit-Id: 95b4ab98-eaf7-47a9-93be-d4364de7462c
	I0906 12:21:09.786014   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:09.786405   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:10.283812   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:10.283844   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:10.283887   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:10.283896   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:10.286832   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:10.286844   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:10.286850   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:10.286854   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:10 GMT
	I0906 12:21:10.286859   13103 round_trippers.go:580]     Audit-Id: c137c5aa-ce53-40a1-8e3f-d5c95e35f70b
	I0906 12:21:10.286863   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:10.286868   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:10.286872   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:10.287129   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:10.287326   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:10.782721   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:10.782741   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:10.782751   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:10.782764   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:10.785658   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:10.785670   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:10.785687   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:10.785692   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:10.785696   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:10.785700   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:10 GMT
	I0906 12:21:10.785702   13103 round_trippers.go:580]     Audit-Id: 26d17310-f8a6-4ca9-96e2-32b23e99741c
	I0906 12:21:10.785705   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:10.785807   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:11.284555   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.284583   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.284595   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.284603   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.287262   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:11.287276   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.287283   13103 round_trippers.go:580]     Audit-Id: 298419bc-1b89-4009-b333-f9ebaaac792a
	I0906 12:21:11.287287   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.287291   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.287295   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.287299   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.287303   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.287426   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:11.782776   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.782797   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.782827   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.782834   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.785262   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:11.785274   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.785280   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.785283   13103 round_trippers.go:580]     Audit-Id: 51538514-8dce-4de4-82af-a290dfaf42ba
	I0906 12:21:11.785286   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.785309   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.785316   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.785319   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.785398   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:11.785587   13103 node_ready.go:49] node "multinode-459000" has status "Ready":"True"
	I0906 12:21:11.785600   13103 node_ready.go:38] duration metric: took 16.50328117s for node "multinode-459000" to be "Ready" ...
	I0906 12:21:11.785607   13103 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:21:11.785647   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:11.785653   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.785658   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.785663   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.787289   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:11.787313   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.787322   13103 round_trippers.go:580]     Audit-Id: 2eb240ae-11a6-4539-b244-1f271eb9eb36
	I0906 12:21:11.787326   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.787330   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.787332   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.787336   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.787338   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.787991   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"908"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88963 chars]
	I0906 12:21:11.789896   13103 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:11.789934   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:11.789939   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.789945   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.789949   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.791666   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:11.791678   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.791685   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.791691   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.791694   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.791699   13103 round_trippers.go:580]     Audit-Id: e2e9113d-9ff2-4043-9551-32ea69ce30f1
	I0906 12:21:11.791703   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.791706   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.791821   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:11.792083   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.792090   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.792095   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.792099   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.793082   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:11.793091   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.793099   13103 round_trippers.go:580]     Audit-Id: af531fa0-d516-472c-b40f-a602285a709a
	I0906 12:21:11.793105   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.793110   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.793116   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.793121   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.793126   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.793281   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:12.290348   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:12.290372   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.290383   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.290392   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.292907   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:12.292922   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.292929   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.292933   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.292938   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.292941   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.292946   13103 round_trippers.go:580]     Audit-Id: edba7332-6fb8-4802-9aee-2c3c9563ae9c
	I0906 12:21:12.292949   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.293171   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:12.293453   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:12.293460   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.293465   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.293468   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.294506   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.294516   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.294522   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.294527   13103 round_trippers.go:580]     Audit-Id: 15e49219-17f5-4b87-8dc8-8dd484c4cd61
	I0906 12:21:12.294532   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.294537   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.294540   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.294543   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.294732   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:12.790491   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:12.790508   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.790518   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.790523   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.792321   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.792329   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.792334   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.792338   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.792340   13103 round_trippers.go:580]     Audit-Id: 12dc6b9d-7810-4c86-9fc1-81575bbae058
	I0906 12:21:12.792343   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.792346   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.792349   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.792438   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:12.792725   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:12.792732   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.792738   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.792743   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.794016   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.794023   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.794028   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.794031   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.794034   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.794036   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.794039   13103 round_trippers.go:580]     Audit-Id: cf5f613a-ccd9-4db7-9429-f36a136edcb0
	I0906 12:21:12.794043   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.794107   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.290999   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:13.291027   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.291039   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.291046   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.294091   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:13.294107   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.294114   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.294119   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.294122   13103 round_trippers.go:580]     Audit-Id: 2914cdba-2b31-4706-b8b6-9fc62d2eb6f8
	I0906 12:21:13.294127   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.294131   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.294136   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.294376   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:13.294791   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:13.294801   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.294809   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.294813   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.296177   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:13.296187   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.296190   13103 round_trippers.go:580]     Audit-Id: c8739fcf-eef5-458d-95ca-2d0ad6c03ca4
	I0906 12:21:13.296194   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.296198   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.296202   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.296206   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.296210   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.296360   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.791389   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:13.791416   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.791428   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.791436   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.794504   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:13.794524   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.794531   13103 round_trippers.go:580]     Audit-Id: 1e0fc598-7606-4704-947f-eff0dfcd612d
	I0906 12:21:13.794536   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.794555   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.794563   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.794567   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.794574   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.794767   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:13.795159   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:13.795169   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.795177   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.795181   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.796593   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:13.796602   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.796607   13103 round_trippers.go:580]     Audit-Id: f2819312-997f-4644-981a-c9a96a4b81c4
	I0906 12:21:13.796611   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.796613   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.796616   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.796618   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.796621   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.796684   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.796852   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:14.290091   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:14.290107   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.290116   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.290121   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.292386   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:14.292398   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.292404   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.292408   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.292420   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.292423   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.292426   13103 round_trippers.go:580]     Audit-Id: b216591b-36b4-4ea5-8115-7316edee1389
	I0906 12:21:14.292429   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.292507   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:14.292791   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:14.292798   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.292803   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.292807   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.293808   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:14.293817   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.293824   13103 round_trippers.go:580]     Audit-Id: 3d892497-eaef-4670-a14b-7ad0fc9e3ba4
	I0906 12:21:14.293829   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.293833   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.293836   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.293839   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.293841   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.294121   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:14.790294   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:14.790336   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.790350   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.790372   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.792990   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:14.793003   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.793011   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.793014   13103 round_trippers.go:580]     Audit-Id: 18d2bb7c-6a82-4cb6-83fb-3ff3f0702de1
	I0906 12:21:14.793018   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.793020   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.793023   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.793057   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.793204   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:14.793496   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:14.793503   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.793509   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.793512   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.794616   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:14.794624   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.794628   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.794632   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.794635   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.794637   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.794640   13103 round_trippers.go:580]     Audit-Id: fc4d182e-61b8-4501-ac19-a68778dfcb78
	I0906 12:21:14.794643   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.794780   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:15.290106   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:15.290127   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.290135   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.290140   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.292663   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:15.292675   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.292680   13103 round_trippers.go:580]     Audit-Id: 103f5efe-9afc-4fd2-a664-63ec6be292a5
	I0906 12:21:15.292683   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.292687   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.292690   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.292692   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.292695   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.292764   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:15.293046   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:15.293053   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.293058   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.293062   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.294226   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:15.294245   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.294254   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.294258   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.294261   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.294264   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.294266   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.294268   13103 round_trippers.go:580]     Audit-Id: eee0a50e-ed8d-4c10-b2cf-e8e447bb8f85
	I0906 12:21:15.294325   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:15.790866   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:15.790888   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.790898   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.790904   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.793667   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:15.793683   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.793689   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.793693   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.793699   13103 round_trippers.go:580]     Audit-Id: 4a951143-6879-4262-b124-530ae44f12b6
	I0906 12:21:15.793703   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.793706   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.793725   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.793900   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:15.794275   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:15.794286   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.794293   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.794297   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.795734   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:15.795744   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.795749   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.795754   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.795758   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.795762   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.795765   13103 round_trippers.go:580]     Audit-Id: 1d3366c9-abcd-444b-901f-cd8c59b24b0b
	I0906 12:21:15.795767   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.795821   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:16.290256   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:16.290275   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.290284   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.290290   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.292642   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:16.292655   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.292660   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.292663   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.292667   13103 round_trippers.go:580]     Audit-Id: d728eba4-78e6-490d-9876-de40ab3d2504
	I0906 12:21:16.292670   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.292674   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.292677   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.292961   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:16.293244   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:16.293252   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.293257   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.293261   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.294276   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:16.294285   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.294290   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.294294   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.294297   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.294300   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.294303   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.294306   13103 round_trippers.go:580]     Audit-Id: cae7b7ec-4833-4094-b8df-dbf19c7d37d2
	I0906 12:21:16.294558   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:16.294728   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:16.791363   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:16.791390   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.791402   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.791408   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.794048   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:16.794060   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.794065   13103 round_trippers.go:580]     Audit-Id: f59eb22e-fd55-4628-b75d-05898d911e96
	I0906 12:21:16.794069   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.794071   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.794075   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.794077   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.794081   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.794151   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:16.794458   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:16.794465   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.794470   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.794474   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.795665   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:16.795672   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.795678   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.795681   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.795685   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.795687   13103 round_trippers.go:580]     Audit-Id: ec472ec1-1223-49bc-8f4f-91e810fc4307
	I0906 12:21:16.795690   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.795693   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.795800   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:17.289973   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:17.289991   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.289997   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.290000   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.291730   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.291752   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.291766   13103 round_trippers.go:580]     Audit-Id: c1fa4522-de4c-4930-9edc-e416768ea52d
	I0906 12:21:17.291786   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.291792   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.291795   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.291798   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.291802   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.291902   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:17.292221   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:17.292228   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.292234   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.292237   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.293364   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.293375   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.293382   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.293386   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.293390   13103 round_trippers.go:580]     Audit-Id: 11f2e1d3-2e85-474d-9b62-a390693faa18
	I0906 12:21:17.293393   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.293395   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.293398   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.293453   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:17.790169   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:17.790185   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.790190   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.790193   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.791789   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.791801   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.791808   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.791814   13103 round_trippers.go:580]     Audit-Id: 9ac97b11-a198-4efb-8efc-0d2cca12e1db
	I0906 12:21:17.791821   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.791827   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.791833   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.791838   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.792162   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:17.792474   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:17.792481   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.792487   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.792492   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.793759   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.793771   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.793778   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.793783   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.793788   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.793792   13103 round_trippers.go:580]     Audit-Id: 3b1483dc-be8e-438f-bf9b-c9aa98fde328
	I0906 12:21:17.793796   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.793800   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.793931   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:18.290365   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:18.290394   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.290406   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.290472   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.292778   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:18.292793   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.292798   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.292802   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.292804   13103 round_trippers.go:580]     Audit-Id: 89948fe5-93dd-4262-9047-3782b382d578
	I0906 12:21:18.292807   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.292809   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.292811   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.292878   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:18.293174   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:18.293181   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.293186   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.293189   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.294291   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:18.294299   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.294311   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.294316   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.294318   13103 round_trippers.go:580]     Audit-Id: 7863f14e-37f8-425c-bace-a4f1fd6c881a
	I0906 12:21:18.294322   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.294325   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.294328   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.294974   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:18.295151   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:18.790221   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:18.790246   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.790258   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.790286   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.792634   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:18.792650   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.792658   13103 round_trippers.go:580]     Audit-Id: dff2e427-1f05-4c64-9b5f-a6b13eadb645
	I0906 12:21:18.792662   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.792666   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.792670   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.792676   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.792679   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.792781   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:18.793116   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:18.793145   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.793152   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.793170   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.794612   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:18.794620   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.794625   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.794628   13103 round_trippers.go:580]     Audit-Id: 1a6da25b-f653-4360-b94d-81192052ff13
	I0906 12:21:18.794632   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.794635   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.794640   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.794643   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.794851   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:19.290301   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:19.290337   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.290346   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.290351   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.292530   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:19.292543   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.292548   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.292551   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.292554   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.292558   13103 round_trippers.go:580]     Audit-Id: 2f2fec6e-8368-4ebc-b6ab-4ad12cbf992b
	I0906 12:21:19.292561   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.292564   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.292633   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:19.292932   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:19.292939   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.292944   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.292948   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.294059   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:19.294068   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.294073   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.294077   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.294080   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.294082   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.294085   13103 round_trippers.go:580]     Audit-Id: ce182c3f-f63b-477f-9d1f-903a0e58563f
	I0906 12:21:19.294088   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.294240   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:19.792069   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:19.792088   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.792096   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.792100   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.794256   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:19.794269   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.794274   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.794278   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.794280   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.794282   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.794285   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.794287   13103 round_trippers.go:580]     Audit-Id: 004f896c-9063-4725-b97a-f4adea5fb1c5
	I0906 12:21:19.794494   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:19.794780   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:19.794787   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.794793   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.794796   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.798926   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:19.798937   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.798941   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.798944   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.798947   13103 round_trippers.go:580]     Audit-Id: 4c388aad-97a1-4855-90d9-b470c8d951ee
	I0906 12:21:19.798949   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.798951   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.798954   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.799557   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:20.290622   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:20.290645   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.290657   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.290663   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.293197   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:20.293213   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.293220   13103 round_trippers.go:580]     Audit-Id: fab7e316-5b57-43ad-81ee-16e332f18312
	I0906 12:21:20.293224   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.293228   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.293231   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.293236   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.293241   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.293329   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:20.293699   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:20.293708   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.293716   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.293723   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.294963   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:20.294973   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.294978   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.294985   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.294991   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.294995   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.295000   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.295003   13103 round_trippers.go:580]     Audit-Id: 772e3113-a0aa-49f0-90ea-85d876fbe1f2
	I0906 12:21:20.295067   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:20.295232   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:20.792125   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:20.792146   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.792158   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.792167   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.795233   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:20.795252   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.795260   13103 round_trippers.go:580]     Audit-Id: 615780de-f810-4f27-a16c-ab7c2e73713e
	I0906 12:21:20.795264   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.795269   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.795272   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.795276   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.795280   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.795665   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:20.796042   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:20.796052   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.796060   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.796081   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.797582   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:20.797591   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.797597   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.797601   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.797605   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.797608   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.797612   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.797617   13103 round_trippers.go:580]     Audit-Id: 91eba761-d083-4d31-84b5-7de10ea4f1fa
	I0906 12:21:20.798027   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:21.292039   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:21.292067   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.292079   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.292085   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.295020   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:21.295040   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.295051   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.295059   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.295074   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.295080   13103 round_trippers.go:580]     Audit-Id: da3b9516-e0b0-4030-8bb5-01eecf8f60f0
	I0906 12:21:21.295085   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.295090   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.295205   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:21.295588   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:21.295599   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.295606   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.295610   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.296951   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:21.296959   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.296964   13103 round_trippers.go:580]     Audit-Id: 4f65491b-a85f-457f-b4a6-9957d11b1b92
	I0906 12:21:21.296980   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.296987   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.296990   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.296992   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.296995   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.297061   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:21.790165   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:21.790183   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.790212   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.790223   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.792474   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:21.792486   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.792491   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.792495   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.792498   13103 round_trippers.go:580]     Audit-Id: e76b82d6-7a4c-491d-a0e1-ec55533b249e
	I0906 12:21:21.792501   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.792504   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.792507   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.792692   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:21.792977   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:21.792984   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.792989   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.792993   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.794028   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:21.794035   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.794040   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.794043   13103 round_trippers.go:580]     Audit-Id: a5239146-5aef-41b1-a558-92ec46d1ec96
	I0906 12:21:21.794046   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.794050   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.794053   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.794056   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.794392   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:22.291077   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:22.291095   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.291103   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.291109   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.293678   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:22.293690   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.293698   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.293702   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.293706   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.293712   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.293715   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.293720   13103 round_trippers.go:580]     Audit-Id: 830b9445-9e92-4e00-a756-44b08fd5b00f
	I0906 12:21:22.293841   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:22.294140   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:22.294148   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.294154   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.294157   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.295227   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:22.295237   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.295242   13103 round_trippers.go:580]     Audit-Id: d1cd6776-0a0d-4d08-a619-0d9c0f5c6498
	I0906 12:21:22.295259   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.295265   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.295268   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.295271   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.295275   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.295424   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:22.295600   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:22.792126   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:22.792148   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.792160   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.792166   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.795102   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:22.795118   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.795125   13103 round_trippers.go:580]     Audit-Id: 0d5e174a-14c0-414f-bd28-e23766377584
	I0906 12:21:22.795129   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.795132   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.795138   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.795144   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.795150   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.795330   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:22.795708   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:22.795718   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.795726   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.795731   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.796977   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:22.796984   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.796990   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.796995   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.796999   13103 round_trippers.go:580]     Audit-Id: c85d75ab-f62a-4f5f-b60d-c6982eb9e60b
	I0906 12:21:22.797002   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.797006   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.797009   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.797298   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:23.292087   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:23.292107   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.292119   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.292127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.294736   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:23.294752   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.294762   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.294768   13103 round_trippers.go:580]     Audit-Id: 844e43cd-1b1e-41c0-937a-9274b6eb3fb9
	I0906 12:21:23.294773   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.294778   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.294785   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.294790   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.295083   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:23.295380   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:23.295388   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.295394   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.295398   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.296737   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:23.296745   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.296749   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.296753   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.296756   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.296759   13103 round_trippers.go:580]     Audit-Id: e7b039bb-92fe-488d-81bd-ffa5a26d96a7
	I0906 12:21:23.296761   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.296764   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.296946   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:23.792162   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:23.792185   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.792197   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.792203   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.796761   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:23.796773   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.796778   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.796782   13103 round_trippers.go:580]     Audit-Id: 8a0d37c4-d5d4-4b34-a3f8-8ed244e5d4fd
	I0906 12:21:23.796785   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.796788   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.796791   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.796793   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.796925   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:23.797224   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:23.797232   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.797238   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.797242   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.799226   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:23.799235   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.799242   13103 round_trippers.go:580]     Audit-Id: 242f9ee1-d98d-4280-808b-a656a2b92498
	I0906 12:21:23.799247   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.799251   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.799255   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.799259   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.799269   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.799414   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.290082   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:24.290096   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.290102   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.290105   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.291823   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:24.291834   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.291840   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.291843   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.291845   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.291848   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.291851   13103 round_trippers.go:580]     Audit-Id: 6a99ab05-94e2-492b-8af0-b2da0016e5b7
	I0906 12:21:24.291854   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.291926   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:24.292215   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:24.292222   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.292228   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.292232   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.294868   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:24.294879   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.294887   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.294891   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.294895   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.294919   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.294928   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.294942   13103 round_trippers.go:580]     Audit-Id: af0ff70b-fd71-4aeb-b3de-4315f34facb9
	I0906 12:21:24.295045   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.792021   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:24.792045   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.792056   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.792061   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.795099   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:24.795111   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.795118   13103 round_trippers.go:580]     Audit-Id: 49cd14f6-b700-4079-acc6-1c23ea6665a8
	I0906 12:21:24.795121   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.795126   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.795129   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.795134   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.795138   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.795441   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:24.795829   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:24.795839   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.795847   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.795852   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.797049   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:24.797056   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.797062   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.797067   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.797071   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.797077   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.797081   13103 round_trippers.go:580]     Audit-Id: 372e796a-fd9b-4c7f-a4f3-348e4bb85f78
	I0906 12:21:24.797084   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.797314   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.797488   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:25.291260   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:25.291308   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.291321   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.291329   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.293814   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:25.293827   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.293837   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.293845   13103 round_trippers.go:580]     Audit-Id: 0913032e-8be9-417d-bb6c-c5369ea32b94
	I0906 12:21:25.293850   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.293855   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.293858   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.293861   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.294069   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:25.294442   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.294452   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.294460   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.294472   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.295729   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.295737   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.295742   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.295745   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.295749   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.295752   13103 round_trippers.go:580]     Audit-Id: f6de5899-967e-4335-937d-b862caacaac4
	I0906 12:21:25.295755   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.295757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.295934   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.790082   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:25.790110   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.790121   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.790127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.794252   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:25.794265   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.794270   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.794274   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.794276   13103 round_trippers.go:580]     Audit-Id: 84c94fcb-2090-4820-8371-d077f05523ae
	I0906 12:21:25.794279   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.794282   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.794285   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.794608   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7039 chars]
	I0906 12:21:25.794917   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.794925   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.794930   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.794933   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.797962   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:25.797972   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.797977   13103 round_trippers.go:580]     Audit-Id: 0bad9809-253e-41f5-b043-fd2cc4b28671
	I0906 12:21:25.797981   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.797983   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.797986   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.797988   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.797991   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.798051   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.798227   13103 pod_ready.go:93] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.798236   13103 pod_ready.go:82] duration metric: took 14.008395399s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.798242   13103 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.798273   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-459000
	I0906 12:21:25.798278   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.798283   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.798287   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.799854   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.799863   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.799870   13103 round_trippers.go:580]     Audit-Id: 403b7e40-de6c-49d8-bd8c-3037daef8684
	I0906 12:21:25.799876   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.799887   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.799892   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.799896   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.799899   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.800134   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"896","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6663 chars]
	I0906 12:21:25.800368   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.800374   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.800379   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.800382   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.801593   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.801602   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.801608   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.801612   13103 round_trippers.go:580]     Audit-Id: cfc71669-8af7-4367-87d9-6662789b2dae
	I0906 12:21:25.801614   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.801617   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.801621   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.801624   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.801765   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.801934   13103 pod_ready.go:93] pod "etcd-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.801942   13103 pod_ready.go:82] duration metric: took 3.694957ms for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.801952   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.801981   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-459000
	I0906 12:21:25.801986   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.801991   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.801996   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.802919   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.802927   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.802934   13103 round_trippers.go:580]     Audit-Id: 63d71678-c53e-4543-b6a5-d040eec32368
	I0906 12:21:25.802942   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.802946   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.802951   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.802955   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.802960   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.803115   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-459000","namespace":"kube-system","uid":"a7ee0531-75a6-405c-928c-1185a0e5ebd0","resourceVersion":"893","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.33:8443","kubernetes.io/config.hash":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.mirror":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.seen":"2024-09-06T19:16:52.157527221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0906 12:21:25.803342   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.803349   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.803355   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.803358   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.804246   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.804252   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.804256   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.804259   13103 round_trippers.go:580]     Audit-Id: 0ac897aa-9ea8-4691-969b-24565f1cec79
	I0906 12:21:25.804262   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.804264   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.804267   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.804270   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.804446   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.804600   13103 pod_ready.go:93] pod "kube-apiserver-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.804607   13103 pod_ready.go:82] duration metric: took 2.650187ms for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.804617   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.804642   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-459000
	I0906 12:21:25.804646   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.804652   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.804656   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.805698   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.805710   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.805719   13103 round_trippers.go:580]     Audit-Id: d2e6c5a5-f0ad-4b3f-bee2-a22972423cd2
	I0906 12:21:25.805725   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.805729   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.805733   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.805738   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.805741   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.805885   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-459000","namespace":"kube-system","uid":"ef9a4034-636f-4d52-b328-40aff0e03ccb","resourceVersion":"882","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.mirror":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.seen":"2024-09-06T19:16:52.157528036Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0906 12:21:25.806107   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.806114   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.806120   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.806124   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.807056   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.807066   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.807072   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.807075   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.807089   13103 round_trippers.go:580]     Audit-Id: 035907c2-91f4-4135-ba77-18e01d4e93aa
	I0906 12:21:25.807095   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.807098   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.807100   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.807202   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.807359   13103 pod_ready.go:93] pod "kube-controller-manager-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.807367   13103 pod_ready.go:82] duration metric: took 2.745265ms for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.807373   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.807399   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:21:25.807404   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.807410   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.807414   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.808305   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.808312   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.808316   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.808320   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.808323   13103 round_trippers.go:580]     Audit-Id: e1b9f127-6568-4885-a928-a313180b5cfc
	I0906 12:21:25.808326   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.808330   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.808333   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.808489   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crzpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"253c78d8-0d56-49e8-a00c-99218c50beac","resourceVersion":"505","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:21:25.808732   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:21:25.808739   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.808746   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.808749   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.809591   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.809599   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.809603   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.809608   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.809611   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.809613   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.809616   13103 round_trippers.go:580]     Audit-Id: 12243566-34b1-46b1-8e77-91a9e8c62dc1
	I0906 12:21:25.809625   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.809714   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m02","uid":"42483c05-2f0a-48b5-a783-4c5958284f86","resourceVersion":"573","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_17_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3818 chars]
	I0906 12:21:25.809852   13103 pod_ready.go:93] pod "kube-proxy-crzpl" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.809859   13103 pod_ready.go:82] duration metric: took 2.481658ms for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.809864   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.990651   13103 request.go:632] Waited for 180.750264ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:21:25.990733   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:21:25.990745   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.990758   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.990766   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.993181   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:25.993195   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.993203   13103 round_trippers.go:580]     Audit-Id: c00f3d33-4b56-4ae6-a5e8-81c5026b67c8
	I0906 12:21:25.993206   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.993211   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.993214   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.993219   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.993223   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:25.993303   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t24bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"626397be-3b5a-4dd4-8932-283e8edb0d27","resourceVersion":"878","creationTimestamp":"2024-09-06T19:16:56Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0906 12:21:26.191274   13103 request.go:632] Waited for 197.606599ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.191412   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.191429   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.191441   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.191450   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.194207   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.194222   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.194229   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.194233   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.194237   13103 round_trippers.go:580]     Audit-Id: 9fb32ea2-3741-4f49-bb50-a2d213c3ba43
	I0906 12:21:26.194241   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.194245   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.194250   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.194413   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:26.194661   13103 pod_ready.go:93] pod "kube-proxy-t24bs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.194674   13103 pod_ready.go:82] duration metric: took 384.805332ms for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.194683   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.390345   13103 request.go:632] Waited for 195.620855ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:21:26.390407   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:21:26.390414   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.390423   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.390449   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.392604   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.392621   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.392646   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.392655   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.392658   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.392660   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.392665   13103 round_trippers.go:580]     Audit-Id: 270f284c-7338-4efa-b17a-1a10c014da62
	I0906 12:21:26.392667   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.392768   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vqcpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6","resourceVersion":"735","creationTimestamp":"2024-09-06T19:18:30Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:21:26.591656   13103 request.go:632] Waited for 198.580484ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:21:26.591735   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:21:26.591747   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.591759   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.591766   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.594204   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.594217   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.594223   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.594227   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.594230   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.594232   13103 round_trippers.go:580]     Audit-Id: 2f715b53-17f6-46aa-a414-cdfa14512543
	I0906 12:21:26.594235   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.594238   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.594320   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m03","uid":"6c54d256-cf96-4ec0-9d0b-36c85c77ef2b","resourceVersion":"760","creationTimestamp":"2024-09-06T19:19:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_19_25_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:19:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3635 chars]
	I0906 12:21:26.594506   13103 pod_ready.go:93] pod "kube-proxy-vqcpj" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.594515   13103 pod_ready.go:82] duration metric: took 399.828258ms for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.594522   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.792106   13103 request.go:632] Waited for 197.521385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:21:26.792145   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:21:26.792151   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.792159   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.792164   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.794274   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.794287   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.794292   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.794295   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.794297   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.794300   13103 round_trippers.go:580]     Audit-Id: a0b7ecef-315a-4ee1-b32f-542e84989097
	I0906 12:21:26.794310   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.794325   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.794421   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-459000","namespace":"kube-system","uid":"4602221a-c2e8-4f7d-a31e-2910196cb32b","resourceVersion":"887","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.mirror":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.seen":"2024-09-06T19:16:46.929338017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0906 12:21:26.990594   13103 request.go:632] Waited for 195.896372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.990633   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.990639   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.990649   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.990656   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.992802   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.992815   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.992820   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.992824   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.992827   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:26.992829   13103 round_trippers.go:580]     Audit-Id: 4800a535-951a-44f6-b035-5009b5db7c8d
	I0906 12:21:26.992832   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.992836   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.993044   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:26.993239   13103 pod_ready.go:93] pod "kube-scheduler-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.993248   13103 pod_ready.go:82] duration metric: took 398.723382ms for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.993255   13103 pod_ready.go:39] duration metric: took 15.207711162s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:21:26.993267   13103 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:21:26.993321   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:21:27.005121   13103 command_runner.go:130] > 1692
	I0906 12:21:27.005342   13103 api_server.go:72] duration metric: took 31.982233194s to wait for apiserver process to appear ...
	I0906 12:21:27.005350   13103 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:21:27.005359   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:21:27.008362   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:21:27.008393   13103 round_trippers.go:463] GET https://192.169.0.33:8443/version
	I0906 12:21:27.008397   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.008403   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.008406   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.008898   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:27.008905   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.008910   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.008915   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.008919   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.008922   13103 round_trippers.go:580]     Content-Length: 263
	I0906 12:21:27.008927   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.008941   13103 round_trippers.go:580]     Audit-Id: afc79679-e8c5-4a0a-b383-34d3dd5cf866
	I0906 12:21:27.008945   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.008953   13103 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 12:21:27.008973   13103 api_server.go:141] control plane version: v1.31.0
	I0906 12:21:27.008981   13103 api_server.go:131] duration metric: took 3.627345ms to wait for apiserver health ...
	I0906 12:21:27.008986   13103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:21:27.192136   13103 request.go:632] Waited for 183.091553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.192271   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.192278   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.192286   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.192292   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.195706   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:27.195721   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.195729   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.195733   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.195738   13103 round_trippers.go:580]     Audit-Id: af203598-9027-4608-960f-5efe9b85e522
	I0906 12:21:27.195741   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.195757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.195761   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.196938   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89323 chars]
	I0906 12:21:27.198936   13103 system_pods.go:59] 12 kube-system pods found
	I0906 12:21:27.198946   13103 system_pods.go:61] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running
	I0906 12:21:27.198950   13103 system_pods.go:61] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running
	I0906 12:21:27.198953   13103 system_pods.go:61] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:21:27.198957   13103 system_pods.go:61] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:21:27.198959   13103 system_pods.go:61] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:21:27.198962   13103 system_pods.go:61] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running
	I0906 12:21:27.198968   13103 system_pods.go:61] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running
	I0906 12:21:27.198970   13103 system_pods.go:61] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:21:27.198973   13103 system_pods.go:61] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:21:27.198975   13103 system_pods.go:61] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:21:27.198978   13103 system_pods.go:61] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running
	I0906 12:21:27.198982   13103 system_pods.go:61] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:21:27.198989   13103 system_pods.go:74] duration metric: took 189.999782ms to wait for pod list to return data ...
	I0906 12:21:27.198995   13103 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:21:27.390207   13103 request.go:632] Waited for 191.164821ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:21:27.390245   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:21:27.390252   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.390260   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.390264   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.392029   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:27.392044   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.392049   13103 round_trippers.go:580]     Audit-Id: 2fbbe1f8-a5e2-419a-8fe6-1b6b60c2c579
	I0906 12:21:27.392053   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.392056   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.392058   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.392061   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.392063   13103 round_trippers.go:580]     Content-Length: 261
	I0906 12:21:27.392066   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.392086   13103 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2b97d238-fe0f-46a4-b550-296f608e88e4","resourceVersion":"351","creationTimestamp":"2024-09-06T19:16:57Z"}}]}
	I0906 12:21:27.392202   13103 default_sa.go:45] found service account: "default"
	I0906 12:21:27.392211   13103 default_sa.go:55] duration metric: took 193.2122ms for default service account to be created ...
	I0906 12:21:27.392219   13103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:21:27.592153   13103 request.go:632] Waited for 199.860611ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.592227   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.592245   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.592256   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.592265   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.595123   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:27.595136   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.595143   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.595153   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.595157   13103 round_trippers.go:580]     Audit-Id: bffb0aa4-39bf-41e6-9363-65a6d47aff42
	I0906 12:21:27.595160   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.595164   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.595168   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.596227   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89323 chars]
	I0906 12:21:27.598193   13103 system_pods.go:86] 12 kube-system pods found
	I0906 12:21:27.598204   13103 system_pods.go:89] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running
	I0906 12:21:27.598208   13103 system_pods.go:89] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running
	I0906 12:21:27.598211   13103 system_pods.go:89] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:21:27.598214   13103 system_pods.go:89] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:21:27.598216   13103 system_pods.go:89] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:21:27.598220   13103 system_pods.go:89] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running
	I0906 12:21:27.598224   13103 system_pods.go:89] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running
	I0906 12:21:27.598227   13103 system_pods.go:89] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:21:27.598229   13103 system_pods.go:89] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:21:27.598236   13103 system_pods.go:89] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:21:27.598239   13103 system_pods.go:89] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running
	I0906 12:21:27.598243   13103 system_pods.go:89] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:21:27.598250   13103 system_pods.go:126] duration metric: took 206.027101ms to wait for k8s-apps to be running ...
	I0906 12:21:27.598262   13103 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:21:27.598315   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:21:27.609404   13103 system_svc.go:56] duration metric: took 11.137288ms WaitForService to wait for kubelet
	I0906 12:21:27.609422   13103 kubeadm.go:582] duration metric: took 32.586314845s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:21:27.609435   13103 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:21:27.791184   13103 request.go:632] Waited for 181.707048ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes
	I0906 12:21:27.791256   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes
	I0906 12:21:27.791270   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.791280   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.791284   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.798698   13103 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:21:27.798713   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.798721   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.798725   13103 round_trippers.go:580]     Audit-Id: 367702bd-19ff-4848-9862-dc41de16b578
	I0906 12:21:27.798729   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.798735   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.798739   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.798744   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.798923   13103 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14655 chars]
	I0906 12:21:27.799352   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799364   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799371   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799374   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799377   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799381   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799384   13103 node_conditions.go:105] duration metric: took 189.944138ms to run NodePressure ...
	I0906 12:21:27.799392   13103 start.go:241] waiting for startup goroutines ...
	I0906 12:21:27.799399   13103 start.go:246] waiting for cluster config update ...
	I0906 12:21:27.799404   13103 start.go:255] writing updated cluster config ...
	I0906 12:21:27.821252   13103 out.go:201] 
	I0906 12:21:27.843093   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:27.843181   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:27.864582   13103 out.go:177] * Starting "multinode-459000-m02" worker node in "multinode-459000" cluster
	I0906 12:21:27.906824   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:21:27.906848   13103 cache.go:56] Caching tarball of preloaded images
	I0906 12:21:27.906988   13103 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:21:27.907000   13103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:21:27.907095   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:27.907830   13103 start.go:360] acquireMachinesLock for multinode-459000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:27.907909   13103 start.go:364] duration metric: took 62.547µs to acquireMachinesLock for "multinode-459000-m02"
	I0906 12:21:27.907926   13103 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:27.907932   13103 fix.go:54] fixHost starting: m02
	I0906 12:21:27.908283   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:21:27.908299   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:21:27.917825   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57515
	I0906 12:21:27.918176   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:21:27.918549   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:21:27.918566   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:21:27.918784   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:21:27.918904   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:21:27.918992   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetState
	I0906 12:21:27.919074   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:27.919163   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 12773
	I0906 12:21:27.920087   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid 12773 missing from process table
	I0906 12:21:27.920111   13103 fix.go:112] recreateIfNeeded on multinode-459000-m02: state=Stopped err=<nil>
	I0906 12:21:27.920123   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	W0906 12:21:27.920203   13103 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:21:27.942601   13103 out.go:177] * Restarting existing hyperkit VM for "multinode-459000-m02" ...
	I0906 12:21:27.984774   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .Start
	I0906 12:21:27.984975   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:27.985004   13103 main.go:141] libmachine: (multinode-459000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid
	I0906 12:21:27.986238   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid 12773 missing from process table
	I0906 12:21:27.986246   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | pid 12773 is in state "Stopped"
	I0906 12:21:27.986260   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid...
	I0906 12:21:27.986559   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Using UUID 656fac0c-2257-4452-9309-51b4437053c1
	I0906 12:21:28.010616   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Generated MAC fe:64:cc:9a:2e:14
	I0906 12:21:28.010637   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000
	I0906 12:21:28.010773   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"656fac0c-2257-4452-9309-51b4437053c1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:21:28.010802   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"656fac0c-2257-4452-9309-51b4437053c1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:21:28.010862   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "656fac0c-2257-4452-9309-51b4437053c1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/multinode-459000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"}
	I0906 12:21:28.010908   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 656fac0c-2257-4452-9309-51b4437053c1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/multinode-459000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"
	I0906 12:21:28.010922   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:21:28.012308   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Pid is 13138
	I0906 12:21:28.012836   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Attempt 0
	I0906 12:21:28.012847   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:28.012954   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 13138
	I0906 12:21:28.014959   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Searching for fe:64:cc:9a:2e:14 in /var/db/dhcpd_leases ...
	I0906 12:21:28.015045   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0906 12:21:28.015075   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca76e}
	I0906 12:21:28.015090   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:21:28.015098   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca6c9}
	I0906 12:21:28.015104   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Found match: fe:64:cc:9a:2e:14
	I0906 12:21:28.015122   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | IP: 192.169.0.34
	I0906 12:21:28.015211   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetConfigRaw
	I0906 12:21:28.015984   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:21:28.016200   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:28.016694   13103 machine.go:93] provisionDockerMachine start ...
	I0906 12:21:28.016705   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:21:28.016832   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:21:28.016942   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:21:28.017045   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:21:28.017163   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:21:28.017252   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:21:28.017405   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:21:28.017574   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:21:28.017581   13103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:21:28.020425   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:21:28.028631   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:21:28.029659   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:21:28.029679   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:21:28.029689   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:21:28.029703   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:21:28.418268   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:21:28.418289   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:21:28.532958   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:21:28.532980   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:21:28.532991   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:21:28.533007   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:21:28.533853   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:21:28.533862   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:21:34.182409   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:21:34.182422   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:21:34.182441   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:21:34.205614   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:22:03.080676   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:22:03.080691   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.080823   13103 buildroot.go:166] provisioning hostname "multinode-459000-m02"
	I0906 12:22:03.080835   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.080941   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.081027   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.081123   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.081198   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.081290   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.081435   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.081584   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.081600   13103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-459000-m02 && echo "multinode-459000-m02" | sudo tee /etc/hostname
	I0906 12:22:03.147432   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-459000-m02
	
	I0906 12:22:03.147447   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.147580   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.147686   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.147777   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.147882   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.148030   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.148181   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.148193   13103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-459000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-459000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-459000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:22:03.210956   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:22:03.210971   13103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:22:03.210983   13103 buildroot.go:174] setting up certificates
	I0906 12:22:03.210989   13103 provision.go:84] configureAuth start
	I0906 12:22:03.210996   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.211127   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:22:03.211230   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.211317   13103 provision.go:143] copyHostCerts
	I0906 12:22:03.211342   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:22:03.211388   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:22:03.211393   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:22:03.211527   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:22:03.211723   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:22:03.211752   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:22:03.211757   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:22:03.211879   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:22:03.212065   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:22:03.212095   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:22:03.212100   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:22:03.212185   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:22:03.212343   13103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.multinode-459000-m02 san=[127.0.0.1 192.169.0.34 localhost minikube multinode-459000-m02]
	I0906 12:22:03.292544   13103 provision.go:177] copyRemoteCerts
	I0906 12:22:03.292595   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:22:03.292609   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.292765   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.292872   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.292982   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.293071   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:03.328230   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:22:03.328298   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:22:03.348053   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:22:03.348131   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0906 12:22:03.367639   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:22:03.367712   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:22:03.387332   13103 provision.go:87] duration metric: took 176.33502ms to configureAuth
	I0906 12:22:03.387347   13103 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:22:03.387513   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:22:03.387530   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:03.387682   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.387763   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.387851   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.387925   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.388009   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.388123   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.388249   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.388257   13103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:22:03.443432   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:22:03.443443   13103 buildroot.go:70] root file system type: tmpfs
	I0906 12:22:03.443517   13103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:22:03.443528   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.443676   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.443804   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.443902   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.443992   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.444119   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.444251   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.444297   13103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.33"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:22:03.511777   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.33
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:22:03.511796   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.511939   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.512046   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.512150   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.512229   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.512369   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.512523   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.512537   13103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:22:05.101095   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:22:05.101110   13103 machine.go:96] duration metric: took 37.084578612s to provisionDockerMachine
	I0906 12:22:05.101117   13103 start.go:293] postStartSetup for "multinode-459000-m02" (driver="hyperkit")
	I0906 12:22:05.101128   13103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:22:05.101143   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.101326   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:22:05.101340   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.101444   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.101546   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.101646   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.101727   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.136158   13103 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:22:05.139064   13103 command_runner.go:130] > NAME=Buildroot
	I0906 12:22:05.139075   13103 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 12:22:05.139080   13103 command_runner.go:130] > ID=buildroot
	I0906 12:22:05.139085   13103 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 12:22:05.139091   13103 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 12:22:05.139245   13103 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:22:05.139254   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:22:05.139354   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:22:05.139523   13103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:22:05.139532   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:22:05.139729   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:22:05.147744   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:22:05.167085   13103 start.go:296] duration metric: took 65.96042ms for postStartSetup
	I0906 12:22:05.167104   13103 fix.go:56] duration metric: took 37.259343707s for fixHost
	I0906 12:22:05.167120   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.167254   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.167358   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.167446   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.167521   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.167651   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:05.167820   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:05.167828   13103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:22:05.223842   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650525.359228063
	
	I0906 12:22:05.223853   13103 fix.go:216] guest clock: 1725650525.359228063
	I0906 12:22:05.223859   13103 fix.go:229] Guest: 2024-09-06 12:22:05.359228063 -0700 PDT Remote: 2024-09-06 12:22:05.16711 -0700 PDT m=+120.857961279 (delta=192.118063ms)
	I0906 12:22:05.223869   13103 fix.go:200] guest clock delta is within tolerance: 192.118063ms
	I0906 12:22:05.223874   13103 start.go:83] releasing machines lock for "multinode-459000-m02", held for 37.316129214s
	I0906 12:22:05.223892   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.224018   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:22:05.247126   13103 out.go:177] * Found network options:
	I0906 12:22:05.267149   13103 out.go:177]   - NO_PROXY=192.169.0.33
	W0906 12:22:05.288480   13103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:22:05.288517   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289464   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289709   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289822   13103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:22:05.289870   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	W0906 12:22:05.289953   13103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:22:05.290045   13103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:22:05.290049   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.290072   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.290260   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.290309   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.290487   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.290522   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.290612   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.290641   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.290732   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.322318   13103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 12:22:05.322403   13103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:22:05.322457   13103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:22:05.371219   13103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 12:22:05.371281   13103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0906 12:22:05.371302   13103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:22:05.371309   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:22:05.371372   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:22:05.386255   13103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0906 12:22:05.386586   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:22:05.395028   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:22:05.403351   13103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:22:05.403403   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:22:05.411931   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:22:05.420232   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:22:05.428446   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:22:05.436920   13103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:22:05.445773   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:22:05.453982   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:22:05.462364   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:22:05.470872   13103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:22:05.478456   13103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 12:22:05.478577   13103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:22:05.486053   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:22:05.577721   13103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:22:05.597370   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:22:05.597442   13103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:22:05.616652   13103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0906 12:22:05.617149   13103 command_runner.go:130] > [Unit]
	I0906 12:22:05.617160   13103 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 12:22:05.617165   13103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 12:22:05.617170   13103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0906 12:22:05.617176   13103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0906 12:22:05.617186   13103 command_runner.go:130] > StartLimitBurst=3
	I0906 12:22:05.617191   13103 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 12:22:05.617195   13103 command_runner.go:130] > [Service]
	I0906 12:22:05.617202   13103 command_runner.go:130] > Type=notify
	I0906 12:22:05.617206   13103 command_runner.go:130] > Restart=on-failure
	I0906 12:22:05.617209   13103 command_runner.go:130] > Environment=NO_PROXY=192.169.0.33
	I0906 12:22:05.617215   13103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 12:22:05.617224   13103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 12:22:05.617230   13103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 12:22:05.617236   13103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 12:22:05.617242   13103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 12:22:05.617248   13103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 12:22:05.617254   13103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 12:22:05.617263   13103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 12:22:05.617271   13103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 12:22:05.617274   13103 command_runner.go:130] > ExecStart=
	I0906 12:22:05.617286   13103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0906 12:22:05.617291   13103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 12:22:05.617298   13103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 12:22:05.617304   13103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 12:22:05.617308   13103 command_runner.go:130] > LimitNOFILE=infinity
	I0906 12:22:05.617312   13103 command_runner.go:130] > LimitNPROC=infinity
	I0906 12:22:05.617315   13103 command_runner.go:130] > LimitCORE=infinity
	I0906 12:22:05.617321   13103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 12:22:05.617325   13103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 12:22:05.617329   13103 command_runner.go:130] > TasksMax=infinity
	I0906 12:22:05.617332   13103 command_runner.go:130] > TimeoutStartSec=0
	I0906 12:22:05.617338   13103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 12:22:05.617341   13103 command_runner.go:130] > Delegate=yes
	I0906 12:22:05.617346   13103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 12:22:05.617354   13103 command_runner.go:130] > KillMode=process
	I0906 12:22:05.617358   13103 command_runner.go:130] > [Install]
	I0906 12:22:05.617361   13103 command_runner.go:130] > WantedBy=multi-user.target
	I0906 12:22:05.617421   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:22:05.628871   13103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:22:05.647873   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:22:05.659524   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:22:05.669927   13103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:22:05.694232   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:22:05.704881   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:22:05.719722   13103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 12:22:05.719995   13103 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:22:05.722778   13103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0906 12:22:05.722977   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:22:05.730138   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:22:05.743763   13103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:22:05.836175   13103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:22:05.941964   13103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:22:05.941990   13103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:22:05.956052   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:22:06.050692   13103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:23:07.093245   13103 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0906 12:23:07.093261   13103 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0906 12:23:07.093271   13103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.026654808s)
	I0906 12:23:07.093333   13103 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:23:07.102433   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0906 12:23:07.102446   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.391304610Z" level=info msg="Starting up"
	I0906 12:23:07.102458   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392004946Z" level=info msg="containerd not running, starting managed containerd"
	I0906 12:23:07.102471   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392654963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	I0906 12:23:07.102483   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.410081610Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	I0906 12:23:07.102493   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424704285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0906 12:23:07.102506   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424727648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0906 12:23:07.102517   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424763525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0906 12:23:07.102526   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424774162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102536   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424814976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102546   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424848725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102564   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424989631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102577   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425025159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102587   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425037295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102597   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425045404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102606   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425070702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102615   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425145665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102630   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426659099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102641   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426697531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102662   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426805598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102671   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426843741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0906 12:23:07.102680   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426872817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0906 12:23:07.102689   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426890938Z" level=info msg="metadata content store policy set" policy=shared
	I0906 12:23:07.102699   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428817057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0906 12:23:07.102713   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428864164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0906 12:23:07.102723   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428927784Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0906 12:23:07.102733   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428940464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0906 12:23:07.102743   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428949588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0906 12:23:07.102753   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.429051358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0906 12:23:07.102762   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434538379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0906 12:23:07.102771   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434628871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0906 12:23:07.102780   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434666891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0906 12:23:07.102790   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434697689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0906 12:23:07.102799   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434728108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102811   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434757897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102821   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434791514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102831   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434822320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102842   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434853529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102859   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434883549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102892   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434912597Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102903   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434940545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102913   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434974771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102921   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435007785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102930   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435036996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102938   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435106915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102947   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435139241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102956   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435168766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102964   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435199068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102973   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435228429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102982   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435261229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102991   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435300063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103001   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435332353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103009   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435361642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103018   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435390212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103027   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435421195Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0906 12:23:07.103036   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435456060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103044   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435486969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103053   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435518328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0906 12:23:07.103063   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435600410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0906 12:23:07.103074   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435642893Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0906 12:23:07.103088   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435672635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0906 12:23:07.103181   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435702100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0906 12:23:07.103192   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435729967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103203   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435813148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0906 12:23:07.103210   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435857835Z" level=info msg="NRI interface is disabled by configuration."
	I0906 12:23:07.103218   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436104040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0906 12:23:07.103226   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436210486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0906 12:23:07.103234   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436350222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0906 12:23:07.103242   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436412176Z" level=info msg="containerd successfully booted in 0.027112s"
	I0906 12:23:07.103250   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.419560925Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0906 12:23:07.103257   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.432687700Z" level=info msg="Loading containers: start."
	I0906 12:23:07.103277   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.537897424Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0906 12:23:07.103288   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.166682137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0906 12:23:07.103301   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.209864072Z" level=warning msg="error locating sandbox id 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d: sandbox 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d not found"
	I0906 12:23:07.103309   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.210077786Z" level=info msg="Loading containers: done."
	I0906 12:23:07.103319   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.216995153Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	I0906 12:23:07.103325   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.217101276Z" level=info msg="Daemon has completed initialization"
	I0906 12:23:07.103332   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235153584Z" level=info msg="API listen on /var/run/docker.sock"
	I0906 12:23:07.103338   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235304358Z" level=info msg="API listen on [::]:2376"
	I0906 12:23:07.103345   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 systemd[1]: Started Docker Application Container Engine.
	I0906 12:23:07.103352   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.198320582Z" level=info msg="Processing signal 'terminated'"
	I0906 12:23:07.103361   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199273282Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0906 12:23:07.103370   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199793722Z" level=info msg="Daemon shutdown complete"
	I0906 12:23:07.103379   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0906 12:23:07.103415   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199992866Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0906 12:23:07.103423   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.200011550Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0906 12:23:07.103428   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0906 12:23:07.103433   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0906 12:23:07.103439   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0906 12:23:07.103445   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 dockerd[842]: time="2024-09-06T19:22:07.237222595Z" level=info msg="Starting up"
	I0906 12:23:07.103453   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 dockerd[842]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0906 12:23:07.103461   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0906 12:23:07.103467   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0906 12:23:07.103473   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0906 12:23:07.127876   13103 out.go:201] 
	W0906 12:23:07.148646   13103 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:22:03 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.391304610Z" level=info msg="Starting up"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392004946Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392654963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.410081610Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424704285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424727648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424763525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424774162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424814976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424848725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424989631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425025159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425037295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425045404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425070702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425145665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426659099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426697531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426805598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426843741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426872817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426890938Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428817057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428864164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428927784Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428940464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428949588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.429051358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434538379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434628871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434666891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434697689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434728108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434757897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434791514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434822320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434853529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434883549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434912597Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434940545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434974771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435007785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435036996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435106915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435139241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435168766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435199068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435228429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435261229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435300063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435332353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435361642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435390212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435421195Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435456060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435486969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435518328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435600410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435642893Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435672635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435702100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435729967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435813148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435857835Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436104040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436210486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436350222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436412176Z" level=info msg="containerd successfully booted in 0.027112s"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.419560925Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.432687700Z" level=info msg="Loading containers: start."
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.537897424Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.166682137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.209864072Z" level=warning msg="error locating sandbox id 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d: sandbox 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d not found"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.210077786Z" level=info msg="Loading containers: done."
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.216995153Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.217101276Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235153584Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235304358Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:22:05 multinode-459000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.198320582Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199273282Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199793722Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:22:06 multinode-459000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199992866Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.200011550Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:07 multinode-459000-m02 dockerd[842]: time="2024-09-06T19:22:07.237222595Z" level=info msg="Starting up"
	Sep 06 19:23:07 multinode-459000-m02 dockerd[842]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:23:07.148721   13103 out.go:270] * 
	W0906 12:23:07.149793   13103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:23:07.211707   13103 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:21:22 multinode-459000 dockerd[845]: time="2024-09-06T19:21:22.615695116Z" level=info msg="ignoring event" container=015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:21:22 multinode-459000 dockerd[851]: time="2024-09-06T19:21:22.615971143Z" level=warning msg="cleaning up after shim disconnected" id=015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2 namespace=moby
	Sep 06 19:21:22 multinode-459000 dockerd[851]: time="2024-09-06T19:21:22.616014209Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.010796214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.010939483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.010959472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.011053858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135527858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135651991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135664327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135724235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 cri-dockerd[1098]: time="2024-09-06T19:21:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad31325ddc3d5e3ea42101967060f67540a28c4b1f41caca8f16e7b7a3a3c9fd/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.241786501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.243664310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.243737195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.243870991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 cri-dockerd[1098]: time="2024-09-06T19:21:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bba495a5518dd208171ccb9db9cceab820b1c2c235c6c1192f0651c43f53c7f7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.356855941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.357093941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.357216765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.357512098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.012811668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.012879392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.012892925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.013329730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6494a194eedc9       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   8b97e9911f708       storage-provisioner
	6df02b21eb759       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   bba495a5518dd       busybox-7dff88458-b9hnk
	ddc90f5715c82       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   ad31325ddc3d5       coredns-6f6b679f8f-m6cmh
	2dec5851e2896       12968670680f4                                                                                         2 minutes ago        Running             kindnet-cni               1                   2ecc938461a84       kindnet-255hz
	015c097641e0c       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   8b97e9911f708       storage-provisioner
	2788eae2c4b75       ad83b2ca7b09e                                                                                         2 minutes ago        Running             kube-proxy                1                   50bf8760257c4       kube-proxy-t24bs
	d96e2b3df6396       2e96e5913fc06                                                                                         2 minutes ago        Running             etcd                      1                   f3428c6375c14       etcd-multinode-459000
	a58e4533fa0ae       1766f54c897f0                                                                                         2 minutes ago        Running             kube-scheduler            1                   7edc764eee369       kube-scheduler-multinode-459000
	d35fb0e18edb3       604f5db92eaa8                                                                                         2 minutes ago        Running             kube-apiserver            1                   77a0be6b32eea       kube-apiserver-multinode-459000
	f1e4bf2515674       045733566833c                                                                                         2 minutes ago        Running             kube-controller-manager   1                   7bc49d66119d6       kube-controller-manager-multinode-459000
	eaef5d6a6c3c3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   109009d4e6323       busybox-7dff88458-b9hnk
	12b00d3e81cd0       cbb01a7bd410d                                                                                         5 minutes ago        Exited              coredns                   0                   6766a97ec06fd       coredns-6f6b679f8f-m6cmh
	b2cede164434e       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              6 minutes ago        Exited              kindnet-cni               0                   98079ff18be9c       kindnet-255hz
	e4605e60128b4       ad83b2ca7b09e                                                                                         6 minutes ago        Exited              kube-proxy                0                   68811f115b6f5       kube-proxy-t24bs
	7158af8be3418       1766f54c897f0                                                                                         6 minutes ago        Exited              kube-scheduler            0                   8455632502ed7       kube-scheduler-multinode-459000
	fde17951087f9       045733566833c                                                                                         6 minutes ago        Exited              kube-controller-manager   0                   8b8fefcb9e0b2       kube-controller-manager-multinode-459000
	487be703273e5       2e96e5913fc06                                                                                         6 minutes ago        Exited              etcd                      0                   6f313c531f3e2       etcd-multinode-459000
	95c1a9b114b11       604f5db92eaa8                                                                                         6 minutes ago        Exited              kube-apiserver            0                   03508ab110f1b       kube-apiserver-multinode-459000
	
	
	==> coredns [12b00d3e81cd] <==
	[INFO] 10.244.1.2:36981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000043001s
	[INFO] 10.244.1.2:59796 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062383s
	[INFO] 10.244.1.2:50646 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076591s
	[INFO] 10.244.1.2:54430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006178s
	[INFO] 10.244.1.2:41662 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085328s
	[INFO] 10.244.1.2:51706 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000596695s
	[INFO] 10.244.1.2:52994 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040808s
	[INFO] 10.244.0.3:39411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007956s
	[INFO] 10.244.0.3:34556 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060317s
	[INFO] 10.244.0.3:60370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072655s
	[INFO] 10.244.0.3:39210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079178s
	[INFO] 10.244.1.2:55856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100259s
	[INFO] 10.244.1.2:40604 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064183s
	[INFO] 10.244.1.2:48296 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042905s
	[INFO] 10.244.1.2:53569 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063922s
	[INFO] 10.244.0.3:41096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076712s
	[INFO] 10.244.0.3:37573 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103095s
	[INFO] 10.244.0.3:59516 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071527s
	[INFO] 10.244.0.3:38561 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000066227s
	[INFO] 10.244.1.2:59777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124892s
	[INFO] 10.244.1.2:46865 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000039395s
	[INFO] 10.244.1.2:35696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000036351s
	[INFO] 10.244.1.2:60341 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080309s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ddc90f5715c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58562 - 51028 "HINFO IN 63603369670783559.5709821715024449636. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.012385783s
	
	
	==> describe nodes <==
	Name:               multinode-459000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-459000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-459000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T12_16_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:16:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-459000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:23:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:16:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:16:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:16:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:21:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.33
	  Hostname:    multinode-459000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 59888293c14e47a2952ddf9c971cd2a5
	  System UUID:                01eb4f7c-0000-0000-b53d-2237e8e3c176
	  Boot ID:                    6bf9b2b1-1659-49f1-953a-d0b309ced65e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b9hnk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 coredns-6f6b679f8f-m6cmh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-multinode-459000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-255hz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m12s
	  kube-system                 kube-apiserver-multinode-459000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-multinode-459000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-t24bs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-multinode-459000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m22s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m22s)  kubelet          Node multinode-459000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m22s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    6m16s                  kubelet          Node multinode-459000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s                  kubelet          Node multinode-459000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     6m16s                  kubelet          Node multinode-459000 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m16s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m12s                  node-controller  Node multinode-459000 event: Registered Node multinode-459000 in Controller
	  Normal  NodeReady                5m52s                  kubelet          Node multinode-459000 status is now: NodeReady
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m21s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m21s)  kubelet          Node multinode-459000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m21s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m14s                  node-controller  Node multinode-459000 event: Registered Node multinode-459000 in Controller
	
	
	Name:               multinode-459000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-459000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-459000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T12_17_40_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:17:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-459000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:19:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.34
	  Hostname:    multinode-459000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 88c7641a2b7841348f12d58f0355ab66
	  System UUID:                656f4452-0000-0000-9309-51b4437053c1
	  Boot ID:                    755cc985-7413-4d2f-983a-08afc00f0ddd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m65s6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kindnet-vj8hx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-proxy-crzpl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x2 over 5m29s)  kubelet          Node multinode-459000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x2 over 5m29s)  kubelet          Node multinode-459000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x2 over 5m29s)  kubelet          Node multinode-459000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node multinode-459000-m02 event: Registered Node multinode-459000-m02 in Controller
	  Normal  NodeReady                5m6s                   kubelet          Node multinode-459000-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m14s                  node-controller  Node multinode-459000-m02 event: Registered Node multinode-459000-m02 in Controller
	  Normal  NodeNotReady             94s                    node-controller  Node multinode-459000-m02 status is now: NodeNotReady
	
	
	Name:               multinode-459000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-459000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-459000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T12_19_25_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:19:24 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-459000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:19:45 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 19:19:43 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 19:19:43 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 19:19:43 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 19:19:43 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.35
	  Hostname:    multinode-459000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b5f5f80cd0942c1893bfda04e4289ba
	  System UUID:                64d740e7-0000-0000-9e6d-2850aa3f8dd1
	  Boot ID:                    162fac96-d44a-485d-887f-501812c7473f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88j6v       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m38s
	  kube-system                 kube-proxy-vqcpj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m30s                  kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    4m38s (x2 over 4m38s)  kubelet          Node multinode-459000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s (x2 over 4m38s)  kubelet          Node multinode-459000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m38s (x2 over 4m38s)  kubelet          Node multinode-459000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                4m15s                  kubelet          Node multinode-459000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m44s (x2 over 3m44s)  kubelet          Node multinode-459000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x2 over 3m44s)  kubelet          Node multinode-459000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x2 over 3m44s)  kubelet          Node multinode-459000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m25s                  kubelet          Node multinode-459000-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m14s                  node-controller  Node multinode-459000-m03 event: Registered Node multinode-459000-m03 in Controller
	  Normal  NodeNotReady             94s                    node-controller  Node multinode-459000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.008116] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.724977] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006928] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.840491] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.241289] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.368127] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.096645] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +1.842542] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.248713] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.115189] systemd-fstab-generator[822]: Ignoring "noauto" option for root device
	[  +0.115757] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +2.451883] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.103391] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +0.109908] systemd-fstab-generator[1075]: Ignoring "noauto" option for root device
	[  +0.053000] kauditd_printk_skb: 239 callbacks suppressed
	[  +0.077555] systemd-fstab-generator[1090]: Ignoring "noauto" option for root device
	[  +0.410106] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +1.408579] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +4.610250] kauditd_printk_skb: 128 callbacks suppressed
	[  +2.937326] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[Sep 6 19:21] kauditd_printk_skb: 72 callbacks suppressed
	[ +11.432898] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [487be703273e] <==
	{"level":"info","ts":"2024-09-06T19:16:48.607960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became leader at term 2"}
	{"level":"info","ts":"2024-09-06T19:16:48.607989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bb09f1afbf61f63 elected leader 1bb09f1afbf61f63 at term 2"}
	{"level":"info","ts":"2024-09-06T19:16:48.639274Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1bb09f1afbf61f63","local-member-attributes":"{Name:multinode-459000 ClientURLs:[https://192.169.0.33:2379]}","request-path":"/0/members/1bb09f1afbf61f63/attributes","cluster-id":"cece6eff570a9df4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:16:48.639417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:16:48.639904Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:16:48.641090Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:16:48.641117Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:16:48.646499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:16:48.641728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:16:48.647567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:16:48.650625Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:16:48.651565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.33:2379"}
	{"level":"info","ts":"2024-09-06T19:16:48.652060Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cece6eff570a9df4","local-member-id":"1bb09f1afbf61f63","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:16:48.653245Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:16:48.653358Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:19:56.487570Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-06T19:19:56.487605Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-459000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.33:2380"],"advertise-client-urls":["https://192.169.0.33:2379"]}
	{"level":"warn","ts":"2024-09-06T19:19:56.487653Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:19:56.487778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:19:56.579680Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.33:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:19:56.579709Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.33:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:19:56.579976Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1bb09f1afbf61f63","current-leader-member-id":"1bb09f1afbf61f63"}
	{"level":"info","ts":"2024-09-06T19:19:56.586275Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:19:56.586377Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:19:56.586386Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-459000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.33:2380"],"advertise-client-urls":["https://192.169.0.33:2379"]}
	
	
	==> etcd [d96e2b3df639] <==
	{"level":"info","ts":"2024-09-06T19:20:48.959305Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:20:48.959389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:20:48.959399Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:20:48.959100Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:20:48.961760Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T19:20:48.963778Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:20:48.964314Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:20:48.964470Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1bb09f1afbf61f63","initial-advertise-peer-urls":["https://192.169.0.33:2380"],"listen-peer-urls":["https://192.169.0.33:2380"],"advertise-client-urls":["https://192.169.0.33:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.33:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:20:48.964489Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:20:49.546042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:20:49.546093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:20:49.546111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 received MsgPreVoteResp from 1bb09f1afbf61f63 at term 2"}
	{"level":"info","ts":"2024-09-06T19:20:49.546120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.546125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 received MsgVoteResp from 1bb09f1afbf61f63 at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.546132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.546313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bb09f1afbf61f63 elected leader 1bb09f1afbf61f63 at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.549923Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1bb09f1afbf61f63","local-member-attributes":"{Name:multinode-459000 ClientURLs:[https://192.169.0.33:2379]}","request-path":"/0/members/1bb09f1afbf61f63/attributes","cluster-id":"cece6eff570a9df4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:20:49.550025Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:20:49.551974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:20:49.555409Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:20:49.558334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:20:49.560187Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:20:49.560256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:20:49.577311Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:20:49.578028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.33:2379"}
	
	
	==> kernel <==
	 19:23:09 up 3 min,  0 users,  load average: 0.29, 0.18, 0.07
	Linux multinode-459000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2dec5851e289] <==
	I0906 19:22:23.742104       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:22:33.743113       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:22:33.743305       1 main.go:299] handling current node
	I0906 19:22:33.743393       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:22:33.743423       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:22:33.743576       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:22:33.743804       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:22:43.743922       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:22:43.743971       1 main.go:299] handling current node
	I0906 19:22:43.743984       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:22:43.743990       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:22:43.744320       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:22:43.744406       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:22:53.741228       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:22:53.741279       1 main.go:299] handling current node
	I0906 19:22:53.741294       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:22:53.741300       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:22:53.741474       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:22:53.741679       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:23:03.743219       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:23:03.743382       1 main.go:299] handling current node
	I0906 19:23:03.743450       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:23:03.743509       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:23:03.743688       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:23:03.743818       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b2cede164434] <==
	I0906 19:19:22.128316       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:22.128367       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:19:22.128941       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:22.128980       1 main.go:299] handling current node
	I0906 19:19:22.128992       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:22.128997       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:32.126841       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:32.126964       1 main.go:299] handling current node
	I0906 19:19:32.127063       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:32.127167       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:32.127358       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:32.127440       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:19:32.127630       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.169.0.35 Flags: [] Table: 0} 
	I0906 19:19:42.129810       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:42.129849       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:19:42.130116       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:42.130147       1 main.go:299] handling current node
	I0906 19:19:42.130156       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:42.130160       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:52.125007       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:52.125086       1 main.go:299] handling current node
	I0906 19:19:52.125104       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:52.125112       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:52.125322       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:52.125368       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [95c1a9b114b1] <==
	W0906 19:19:56.508071       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.548776       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.548892       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549003       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549069       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549163       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549227       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549290       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549383       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549481       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549544       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549685       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549749       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549832       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550068       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550143       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550196       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550251       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550304       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550358       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550413       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550468       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550727       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550786       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.554309       1 logging.go:55] [core] [Channel #16 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d35fb0e18edb] <==
	I0906 19:20:51.213655       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:20:51.213849       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:20:51.218917       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:20:51.218957       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:20:51.222736       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:20:51.222770       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:20:51.222776       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:20:51.222780       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:20:51.223007       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:20:51.224883       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:20:51.256270       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:20:51.256498       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:20:51.259903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:20:51.272033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:20:51.272139       1 policy_source.go:224] refreshing policies
	I0906 19:20:51.304129       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:20:52.117425       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:20:52.320268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.33]
	I0906 19:20:52.321132       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:20:52.325993       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:20:53.143451       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 19:20:53.270181       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 19:20:53.281490       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 19:20:53.339474       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:20:53.344943       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [f1e4bf251567] <==
	I0906 19:20:54.729543       1 shared_informer.go:320] Caches are synced for deployment
	I0906 19:20:54.742816       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 19:20:54.748869       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 19:20:54.786796       1 shared_informer.go:320] Caches are synced for cronjob
	I0906 19:20:55.177282       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:20:55.187799       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:20:55.187844       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 19:21:11.876376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000"
	I0906 19:21:11.876668       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:21:11.884112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000"
	I0906 19:21:14.684029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000"
	I0906 19:21:24.407113       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="32.695µs"
	I0906 19:21:25.520620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="3.781757ms"
	I0906 19:21:25.520669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.207µs"
	I0906 19:21:25.535379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="7.955237ms"
	I0906 19:21:25.535680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="244.452µs"
	I0906 19:21:34.693436       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:21:34.693677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:21:34.697965       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m02"
	I0906 19:21:34.710040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:21:34.710499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m02"
	I0906 19:21:34.718186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.833991ms"
	I0906 19:21:34.718730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.541µs"
	I0906 19:21:39.774502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m02"
	I0906 19:21:49.864707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	
	
	==> kube-controller-manager [fde17951087f] <==
	I0906 19:18:31.854324       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-459000-m03"
	I0906 19:18:31.908645       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:41.268862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:53.418273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:53.418643       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:18:53.424043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:56.866156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.843606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.854973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.997581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.997845       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:19:25.001102       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:19:25.002613       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-459000-m03\" does not exist"
	I0906 19:19:25.009068       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-459000-m03" podCIDRs=["10.244.3.0/24"]
	I0906 19:19:25.009104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:25.009119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:25.013998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:25.864408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:26.156277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:26.956082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:35.394991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:43.273729       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:19:43.274191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:43.280958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:46.884789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	
	
	==> kube-proxy [2788eae2c4b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:20:52.791025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:20:52.821823       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.33"]
	E0906 19:20:52.821889       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:20:52.871039       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:20:52.871061       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:20:52.871077       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:20:52.875061       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:20:52.875526       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:20:52.875596       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:20:52.877336       1 config.go:197] "Starting service config controller"
	I0906 19:20:52.877515       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:20:52.885155       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:20:52.877746       1 config.go:326] "Starting node config controller"
	I0906 19:20:52.885623       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:20:52.885628       1 shared_informer.go:320] Caches are synced for node config
	I0906 19:20:52.881916       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:20:52.887471       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:20:52.887477       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e4605e60128b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:16:58.347702       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:16:58.357397       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.33"]
	E0906 19:16:58.357448       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:16:58.407427       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:16:58.407503       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:16:58.407523       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:16:58.444815       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:16:58.446848       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:16:58.446878       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:16:58.448577       1 config.go:197] "Starting service config controller"
	I0906 19:16:58.448610       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:16:58.448626       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:16:58.448630       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:16:58.450713       1 config.go:326] "Starting node config controller"
	I0906 19:16:58.450741       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:16:58.549832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:16:58.549836       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:16:58.551579       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7158af8be341] <==
	E0906 19:16:49.915023       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 19:16:49.910625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 19:16:49.915284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 19:16:49.915555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:16:49.915841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:16:49.916137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:16:49.917025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:16:49.917348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 19:16:49.917930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.751396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 19:16:50.751615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.890423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:16:50.890467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.909016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 19:16:50.909062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.968205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:16:50.968248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0906 19:16:51.507806       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:19:56.483090       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a58e4533fa0a] <==
	I0906 19:20:49.414242       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:20:51.191401       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:20:51.191656       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:20:51.191788       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:20:51.191929       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:20:51.212149       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:20:51.212314       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:20:51.214484       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:20:51.215369       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:20:51.216409       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:20:51.216221       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:20:51.320542       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:21:02 multinode-459000 kubelet[1357]: E0906 19:21:02.966052    1357 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Sep 06 19:21:03 multinode-459000 kubelet[1357]: E0906 19:21:03.955113    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-b9hnk" podUID="ccbaa8a5-a216-4cec-a0bb-d3979a865144"
	Sep 06 19:21:04 multinode-459000 kubelet[1357]: E0906 19:21:04.954068    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-6f6b679f8f-m6cmh" podUID="ba4177c1-9ec9-4bab-bac7-87474036436d"
	Sep 06 19:21:05 multinode-459000 kubelet[1357]: E0906 19:21:05.955259    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-b9hnk" podUID="ccbaa8a5-a216-4cec-a0bb-d3979a865144"
	Sep 06 19:21:06 multinode-459000 kubelet[1357]: E0906 19:21:06.954990    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-6f6b679f8f-m6cmh" podUID="ba4177c1-9ec9-4bab-bac7-87474036436d"
	Sep 06 19:21:07 multinode-459000 kubelet[1357]: E0906 19:21:07.619028    1357 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 06 19:21:07 multinode-459000 kubelet[1357]: E0906 19:21:07.619136    1357 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba4177c1-9ec9-4bab-bac7-87474036436d-config-volume podName:ba4177c1-9ec9-4bab-bac7-87474036436d nodeName:}" failed. No retries permitted until 2024-09-06 19:21:23.619108281 +0000 UTC m=+35.795384592 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ba4177c1-9ec9-4bab-bac7-87474036436d-config-volume") pod "coredns-6f6b679f8f-m6cmh" (UID: "ba4177c1-9ec9-4bab-bac7-87474036436d") : object "kube-system"/"coredns" not registered
	Sep 06 19:21:07 multinode-459000 kubelet[1357]: E0906 19:21:07.720128    1357 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 06 19:21:07 multinode-459000 kubelet[1357]: E0906 19:21:07.720180    1357 projected.go:194] Error preparing data for projected volume kube-api-access-vsj8b for pod default/busybox-7dff88458-b9hnk: object "default"/"kube-root-ca.crt" not registered
	Sep 06 19:21:07 multinode-459000 kubelet[1357]: E0906 19:21:07.720245    1357 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccbaa8a5-a216-4cec-a0bb-d3979a865144-kube-api-access-vsj8b podName:ccbaa8a5-a216-4cec-a0bb-d3979a865144 nodeName:}" failed. No retries permitted until 2024-09-06 19:21:23.720232012 +0000 UTC m=+35.896508320 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vsj8b" (UniqueName: "kubernetes.io/projected/ccbaa8a5-a216-4cec-a0bb-d3979a865144-kube-api-access-vsj8b") pod "busybox-7dff88458-b9hnk" (UID: "ccbaa8a5-a216-4cec-a0bb-d3979a865144") : object "default"/"kube-root-ca.crt" not registered
	Sep 06 19:21:07 multinode-459000 kubelet[1357]: E0906 19:21:07.954390    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-b9hnk" podUID="ccbaa8a5-a216-4cec-a0bb-d3979a865144"
	Sep 06 19:21:23 multinode-459000 kubelet[1357]: I0906 19:21:23.361664    1357 scope.go:117] "RemoveContainer" containerID="b8675b45ba97ecf6fcbc195cc754e085b88aa6669460fab127e3e88567afe358"
	Sep 06 19:21:23 multinode-459000 kubelet[1357]: I0906 19:21:23.361990    1357 scope.go:117] "RemoveContainer" containerID="015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2"
	Sep 06 19:21:23 multinode-459000 kubelet[1357]: E0906 19:21:23.362429    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4e34dcf1-a1c9-464c-9680-a55570fa0319)\"" pod="kube-system/storage-provisioner" podUID="4e34dcf1-a1c9-464c-9680-a55570fa0319"
	Sep 06 19:21:33 multinode-459000 kubelet[1357]: I0906 19:21:33.954973    1357 scope.go:117] "RemoveContainer" containerID="015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2"
	Sep 06 19:21:47 multinode-459000 kubelet[1357]: E0906 19:21:47.996843    1357 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:22:47 multinode-459000 kubelet[1357]: E0906 19:22:47.988509    1357 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-459000 -n multinode-459000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-459000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (205.47s)

                                                
                                    
TestMultiNode/serial/DeleteNode (154.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 node delete m03
E0906 12:23:13.463546    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-459000 node delete m03: (2m30.689420866s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr: exit status 2 (243.960207ms)

                                                
                                                
-- stdout --
	multinode-459000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-459000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 12:25:41.641103   13207 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:25:41.641390   13207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:25:41.641397   13207 out.go:358] Setting ErrFile to fd 2...
	I0906 12:25:41.641400   13207 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:25:41.641582   13207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:25:41.641768   13207 out.go:352] Setting JSON to false
	I0906 12:25:41.641796   13207 mustload.go:65] Loading cluster: multinode-459000
	I0906 12:25:41.641837   13207 notify.go:220] Checking for updates...
	I0906 12:25:41.642091   13207 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:25:41.642106   13207 status.go:255] checking status of multinode-459000 ...
	I0906 12:25:41.642523   13207 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:25:41.642581   13207 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:25:41.651664   13207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57582
	I0906 12:25:41.652050   13207 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:25:41.652482   13207 main.go:141] libmachine: Using API Version  1
	I0906 12:25:41.652505   13207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:25:41.652769   13207 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:25:41.652903   13207 main.go:141] libmachine: (multinode-459000) Calling .GetState
	I0906 12:25:41.652990   13207 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:25:41.653073   13207 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 13116
	I0906 12:25:41.654029   13207 status.go:330] multinode-459000 host status = "Running" (err=<nil>)
	I0906 12:25:41.654047   13207 host.go:66] Checking if "multinode-459000" exists ...
	I0906 12:25:41.654296   13207 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:25:41.654320   13207 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:25:41.662824   13207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57584
	I0906 12:25:41.663158   13207 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:25:41.663507   13207 main.go:141] libmachine: Using API Version  1
	I0906 12:25:41.663519   13207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:25:41.663732   13207 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:25:41.663845   13207 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:25:41.663921   13207 host.go:66] Checking if "multinode-459000" exists ...
	I0906 12:25:41.664165   13207 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:25:41.664186   13207 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:25:41.672798   13207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57586
	I0906 12:25:41.673174   13207 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:25:41.673575   13207 main.go:141] libmachine: Using API Version  1
	I0906 12:25:41.673594   13207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:25:41.673809   13207 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:25:41.673913   13207 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:25:41.674063   13207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:25:41.674087   13207 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:25:41.674169   13207 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:25:41.674255   13207 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:25:41.674339   13207 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:25:41.674419   13207 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:25:41.711571   13207 ssh_runner.go:195] Run: systemctl --version
	I0906 12:25:41.715990   13207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:25:41.728138   13207 kubeconfig.go:125] found "multinode-459000" server: "https://192.169.0.33:8443"
	I0906 12:25:41.728164   13207 api_server.go:166] Checking apiserver status ...
	I0906 12:25:41.728199   13207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:25:41.739746   13207 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1692/cgroup
	W0906 12:25:41.747693   13207 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1692/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:25:41.747743   13207 ssh_runner.go:195] Run: ls
	I0906 12:25:41.750996   13207 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:25:41.754068   13207 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:25:41.754078   13207 status.go:422] multinode-459000 apiserver status = Running (err=<nil>)
	I0906 12:25:41.754087   13207 status.go:257] multinode-459000 status: &{Name:multinode-459000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:25:41.754098   13207 status.go:255] checking status of multinode-459000-m02 ...
	I0906 12:25:41.754366   13207 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:25:41.754386   13207 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:25:41.762994   13207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57591
	I0906 12:25:41.763295   13207 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:25:41.763599   13207 main.go:141] libmachine: Using API Version  1
	I0906 12:25:41.763609   13207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:25:41.763841   13207 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:25:41.763969   13207 main.go:141] libmachine: (multinode-459000-m02) Calling .GetState
	I0906 12:25:41.764045   13207 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:25:41.764132   13207 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 13138
	I0906 12:25:41.765103   13207 status.go:330] multinode-459000-m02 host status = "Running" (err=<nil>)
	I0906 12:25:41.765113   13207 host.go:66] Checking if "multinode-459000-m02" exists ...
	I0906 12:25:41.765379   13207 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:25:41.765402   13207 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:25:41.773928   13207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57593
	I0906 12:25:41.774252   13207 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:25:41.774585   13207 main.go:141] libmachine: Using API Version  1
	I0906 12:25:41.774608   13207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:25:41.774815   13207 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:25:41.774924   13207 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:25:41.775000   13207 host.go:66] Checking if "multinode-459000-m02" exists ...
	I0906 12:25:41.775242   13207 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:25:41.775266   13207 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:25:41.783665   13207 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57595
	I0906 12:25:41.784006   13207 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:25:41.784352   13207 main.go:141] libmachine: Using API Version  1
	I0906 12:25:41.784365   13207 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:25:41.784585   13207 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:25:41.784713   13207 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:25:41.784875   13207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:25:41.784888   13207 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:25:41.784972   13207 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:25:41.785053   13207 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:25:41.785150   13207 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:25:41.785230   13207 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:25:41.817332   13207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:25:41.827519   13207 status.go:257] multinode-459000-m02 status: &{Name:multinode-459000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-459000 -n multinode-459000
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-459000 logs -n 25: (2.853497248s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                            |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile578296277/001/cp-test_multinode-459000-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000:/home/docker/cp-test_multinode-459000-m02_multinode-459000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000 sudo cat                                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /home/docker/cp-test_multinode-459000-m02_multinode-459000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03:/home/docker/cp-test_multinode-459000-m02_multinode-459000-m03.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m02 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000-m03 sudo cat                                                                      | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /home/docker/cp-test_multinode-459000-m02_multinode-459000-m03.txt                                                         |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp testdata/cp-test.txt                                                                                   | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03:/home/docker/cp-test.txt                                                                              |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile578296277/001/cp-test_multinode-459000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:18 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:18 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000:/home/docker/cp-test_multinode-459000-m03_multinode-459000.txt                                            |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000 sudo cat                                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | /home/docker/cp-test_multinode-459000-m03_multinode-459000.txt                                                             |                  |         |         |                     |                     |
	| cp      | multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt                                                          | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000-m02:/home/docker/cp-test_multinode-459000-m03_multinode-459000-m02.txt                                    |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n                                                                                                    | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | multinode-459000-m03 sudo cat                                                                                              |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                   |                  |         |         |                     |                     |
	| ssh     | multinode-459000 ssh -n multinode-459000-m02 sudo cat                                                                      | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | /home/docker/cp-test_multinode-459000-m03_multinode-459000-m02.txt                                                         |                  |         |         |                     |                     |
	| node    | multinode-459000 node stop m03                                                                                             | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	| node    | multinode-459000 node start                                                                                                | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:19 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                 |                  |         |         |                     |                     |
	| node    | list -p multinode-459000                                                                                                   | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT |                     |
	| stop    | -p multinode-459000                                                                                                        | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:19 PDT | 06 Sep 24 12:20 PDT |
	| start   | -p multinode-459000                                                                                                        | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:20 PDT |                     |
	|         | --wait=true -v=8                                                                                                           |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                          |                  |         |         |                     |                     |
	| node    | list -p multinode-459000                                                                                                   | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:23 PDT |                     |
	| node    | multinode-459000 node delete                                                                                               | multinode-459000 | jenkins | v1.34.0 | 06 Sep 24 12:23 PDT | 06 Sep 24 12:25 PDT |
	|         | m03                                                                                                                        |                  |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 12:20:04
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 12:20:04.345863   13103 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:20:04.346053   13103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:04.346060   13103 out.go:358] Setting ErrFile to fd 2...
	I0906 12:20:04.346064   13103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:20:04.346235   13103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:20:04.347624   13103 out.go:352] Setting JSON to false
	I0906 12:20:04.372597   13103 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11975,"bootTime":1725638429,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 12:20:04.372699   13103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 12:20:04.394472   13103 out.go:177] * [multinode-459000] minikube v1.34.0 on Darwin 14.6.1
	I0906 12:20:04.436211   13103 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 12:20:04.436276   13103 notify.go:220] Checking for updates...
	I0906 12:20:04.478971   13103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:04.499819   13103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 12:20:04.521129   13103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 12:20:04.542343   13103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 12:20:04.563008   13103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 12:20:04.584955   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:04.585128   13103 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 12:20:04.585775   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.585861   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:20:04.595482   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57485
	I0906 12:20:04.595845   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:20:04.596336   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:20:04.596353   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:20:04.596616   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:20:04.596748   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.625251   13103 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 12:20:04.667227   13103 start.go:297] selected driver: hyperkit
	I0906 12:20:04.667254   13103 start.go:901] validating driver "hyperkit" against &{Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:04.667526   13103 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 12:20:04.667707   13103 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:20:04.667925   13103 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 12:20:04.677596   13103 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 12:20:04.681720   13103 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.681741   13103 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 12:20:04.684904   13103 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:20:04.684944   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:04.684957   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:04.685037   13103 start.go:340] cluster config:
	{Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:04.685143   13103 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 12:20:04.727037   13103 out.go:177] * Starting "multinode-459000" primary control-plane node in "multinode-459000" cluster
	I0906 12:20:04.748083   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:20:04.748146   13103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 12:20:04.748175   13103 cache.go:56] Caching tarball of preloaded images
	I0906 12:20:04.748360   13103 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:20:04.748383   13103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:20:04.748522   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:20:04.749240   13103 start.go:360] acquireMachinesLock for multinode-459000: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:20:04.749328   13103 start.go:364] duration metric: took 55.823µs to acquireMachinesLock for "multinode-459000"
	I0906 12:20:04.749345   13103 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:20:04.749357   13103 fix.go:54] fixHost starting: 
	I0906 12:20:04.749579   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:20:04.749598   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:20:04.758425   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57487
	I0906 12:20:04.758777   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:20:04.759147   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:20:04.759162   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:20:04.759382   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:20:04.759508   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.759613   13103 main.go:141] libmachine: (multinode-459000) Calling .GetState
	I0906 12:20:04.759719   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.759791   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 12754
	I0906 12:20:04.760733   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid 12754 missing from process table
	I0906 12:20:04.760763   13103 fix.go:112] recreateIfNeeded on multinode-459000: state=Stopped err=<nil>
	I0906 12:20:04.760785   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	W0906 12:20:04.760890   13103 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:20:04.802907   13103 out.go:177] * Restarting existing hyperkit VM for "multinode-459000" ...
	I0906 12:20:04.824117   13103 main.go:141] libmachine: (multinode-459000) Calling .Start
	I0906 12:20:04.824350   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.824408   13103 main.go:141] libmachine: (multinode-459000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid
	I0906 12:20:04.826541   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid 12754 missing from process table
	I0906 12:20:04.826557   13103 main.go:141] libmachine: (multinode-459000) DBG | pid 12754 is in state "Stopped"
	I0906 12:20:04.826571   13103 main.go:141] libmachine: (multinode-459000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid...
	I0906 12:20:04.827002   13103 main.go:141] libmachine: (multinode-459000) DBG | Using UUID 01eb6722-41be-4f7c-b53d-2237e8e3c176
	I0906 12:20:04.935555   13103 main.go:141] libmachine: (multinode-459000) DBG | Generated MAC 3a:dc:bb:38:e3:28
	I0906 12:20:04.935584   13103 main.go:141] libmachine: (multinode-459000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000
	I0906 12:20:04.935690   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"01eb6722-41be-4f7c-b53d-2237e8e3c176", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:20:04.935723   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"01eb6722-41be-4f7c-b53d-2237e8e3c176", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4690)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0906 12:20:04.935758   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "01eb6722-41be-4f7c-b53d-2237e8e3c176", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/multinode-459000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"}
	I0906 12:20:04.935794   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 01eb6722-41be-4f7c-b53d-2237e8e3c176 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/multinode-459000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"
	I0906 12:20:04.935811   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:20:04.937295   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 DEBUG: hyperkit: Pid is 13116
	I0906 12:20:04.937708   13103 main.go:141] libmachine: (multinode-459000) DBG | Attempt 0
	I0906 12:20:04.937719   13103 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:20:04.937806   13103 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 13116
	I0906 12:20:04.939357   13103 main.go:141] libmachine: (multinode-459000) DBG | Searching for 3a:dc:bb:38:e3:28 in /var/db/dhcpd_leases ...
	I0906 12:20:04.939446   13103 main.go:141] libmachine: (multinode-459000) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0906 12:20:04.939476   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:20:04.939495   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca6c9}
	I0906 12:20:04.939523   13103 main.go:141] libmachine: (multinode-459000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca68b}
	I0906 12:20:04.939530   13103 main.go:141] libmachine: (multinode-459000) DBG | Found match: 3a:dc:bb:38:e3:28
	I0906 12:20:04.939550   13103 main.go:141] libmachine: (multinode-459000) DBG | IP: 192.169.0.33
	I0906 12:20:04.939615   13103 main.go:141] libmachine: (multinode-459000) Calling .GetConfigRaw
	I0906 12:20:04.940318   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:04.940491   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:20:04.940980   13103 machine.go:93] provisionDockerMachine start ...
	I0906 12:20:04.940993   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:04.941161   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:04.941289   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:04.941397   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:04.941519   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:04.941644   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:04.941784   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:04.941989   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:04.941997   13103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:20:04.945527   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:20:04.997276   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:20:04.997987   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:20:04.998001   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:20:04.998009   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:20:04.998017   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:20:05.390023   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:20:05.390038   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:20:05.504740   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:20:05.504761   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:20:05.504773   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:20:05.504793   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:20:05.505682   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:20:05.505706   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:20:11.126600   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:20:11.126629   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:20:11.126642   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:20:11.150792   13103 main.go:141] libmachine: (multinode-459000) DBG | 2024/09/06 12:20:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:20:40.017036   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:20:40.017050   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.017188   13103 buildroot.go:166] provisioning hostname "multinode-459000"
	I0906 12:20:40.017198   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.017332   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.017423   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.017512   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.017602   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.017716   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.017845   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.017999   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.018007   13103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-459000 && echo "multinode-459000" | sudo tee /etc/hostname
	I0906 12:20:40.096089   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-459000
	
	I0906 12:20:40.096107   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.096242   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.096342   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.096426   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.096502   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.096618   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.096770   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.096781   13103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-459000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-459000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-459000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:20:40.169206   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:20:40.169225   13103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:20:40.169241   13103 buildroot.go:174] setting up certificates
	I0906 12:20:40.169250   13103 provision.go:84] configureAuth start
	I0906 12:20:40.169257   13103 main.go:141] libmachine: (multinode-459000) Calling .GetMachineName
	I0906 12:20:40.169406   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:40.169492   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.169576   13103 provision.go:143] copyHostCerts
	I0906 12:20:40.169605   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:20:40.169676   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:20:40.169683   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:20:40.170064   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:20:40.170273   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:20:40.170315   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:20:40.170320   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:20:40.170402   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:20:40.170550   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:20:40.170592   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:20:40.170597   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:20:40.170676   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:20:40.170820   13103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.multinode-459000 san=[127.0.0.1 192.169.0.33 localhost minikube multinode-459000]
	I0906 12:20:40.232666   13103 provision.go:177] copyRemoteCerts
	I0906 12:20:40.232717   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:20:40.232731   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.232854   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.232974   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.233068   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.233156   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:40.274812   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:20:40.274888   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:20:40.293995   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:20:40.294068   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:20:40.313187   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:20:40.313258   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 12:20:40.332207   13103 provision.go:87] duration metric: took 162.943562ms to configureAuth
	I0906 12:20:40.332219   13103 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:20:40.332387   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:40.332402   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:40.332534   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.332628   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.332709   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.332780   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.332850   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.332965   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.333093   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.333100   13103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:20:40.400358   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:20:40.400369   13103 buildroot.go:70] root file system type: tmpfs
	I0906 12:20:40.400464   13103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:20:40.400477   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.400616   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.400716   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.400806   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.400897   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.401035   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.401181   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.401224   13103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:20:40.478937   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:20:40.478956   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:40.479091   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:40.479178   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.479269   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:40.479347   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:40.479476   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:40.479629   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:40.479640   13103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:20:42.127114   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:20:42.127129   13103 machine.go:96] duration metric: took 37.186310804s to provisionDockerMachine
	I0906 12:20:42.127143   13103 start.go:293] postStartSetup for "multinode-459000" (driver="hyperkit")
	I0906 12:20:42.127150   13103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:20:42.127165   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.127347   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:20:42.127361   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.127444   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.127542   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.127636   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.127724   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.166901   13103 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:20:42.169868   13103 command_runner.go:130] > NAME=Buildroot
	I0906 12:20:42.169887   13103 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 12:20:42.169893   13103 command_runner.go:130] > ID=buildroot
	I0906 12:20:42.169899   13103 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 12:20:42.169908   13103 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 12:20:42.170001   13103 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:20:42.170014   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:20:42.170122   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:20:42.170312   13103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:20:42.170318   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:20:42.170518   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:20:42.178002   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:20:42.197689   13103 start.go:296] duration metric: took 70.537804ms for postStartSetup
	I0906 12:20:42.197709   13103 fix.go:56] duration metric: took 37.448529222s for fixHost
	I0906 12:20:42.197720   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.197863   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.197977   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.198074   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.198146   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.198279   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:20:42.198417   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0906 12:20:42.198424   13103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:20:42.262511   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650442.394217988
	
	I0906 12:20:42.262522   13103 fix.go:216] guest clock: 1725650442.394217988
	I0906 12:20:42.262528   13103 fix.go:229] Guest: 2024-09-06 12:20:42.394217988 -0700 PDT Remote: 2024-09-06 12:20:42.197712 -0700 PDT m=+37.888180409 (delta=196.505988ms)
	I0906 12:20:42.262551   13103 fix.go:200] guest clock delta is within tolerance: 196.505988ms
	I0906 12:20:42.262555   13103 start.go:83] releasing machines lock for "multinode-459000", held for 37.513393533s
	I0906 12:20:42.262575   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.262704   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:42.262819   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263209   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263322   13103 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:20:42.263421   13103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:20:42.263463   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.263466   13103 ssh_runner.go:195] Run: cat /version.json
	I0906 12:20:42.263476   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:20:42.263583   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.263606   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:20:42.263691   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.263709   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:20:42.263807   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.263822   13103 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:20:42.263897   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.263913   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:20:42.349749   13103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 12:20:42.349790   13103 command_runner.go:130] > {"iso_version": "v1.34.0", "kicbase_version": "v0.0.44-1724862063-19530", "minikube_version": "v1.34.0", "commit": "613a681f9f90c87e637792fcb55bc4d32fe5c29c"}
	I0906 12:20:42.349946   13103 ssh_runner.go:195] Run: systemctl --version
	I0906 12:20:42.354330   13103 command_runner.go:130] > systemd 252 (252)
	I0906 12:20:42.354353   13103 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0906 12:20:42.354539   13103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:20:42.358516   13103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 12:20:42.358541   13103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:20:42.358584   13103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:20:42.371660   13103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0906 12:20:42.371693   13103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:20:42.371706   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:20:42.371808   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:20:42.386518   13103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0906 12:20:42.386805   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:20:42.395515   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:20:42.404507   13103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:20:42.404553   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:20:42.413199   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:20:42.422017   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:20:42.430768   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:20:42.439534   13103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:20:42.448644   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:20:42.457341   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:20:42.465857   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:20:42.474621   13103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:20:42.482317   13103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 12:20:42.482490   13103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:20:42.490267   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:42.589095   13103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:20:42.608272   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:20:42.608350   13103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:20:42.622568   13103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0906 12:20:42.622694   13103 command_runner.go:130] > [Unit]
	I0906 12:20:42.622704   13103 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 12:20:42.622712   13103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 12:20:42.622718   13103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0906 12:20:42.622723   13103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0906 12:20:42.622727   13103 command_runner.go:130] > StartLimitBurst=3
	I0906 12:20:42.622731   13103 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 12:20:42.622734   13103 command_runner.go:130] > [Service]
	I0906 12:20:42.622737   13103 command_runner.go:130] > Type=notify
	I0906 12:20:42.622740   13103 command_runner.go:130] > Restart=on-failure
	I0906 12:20:42.622747   13103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 12:20:42.622754   13103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 12:20:42.622761   13103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 12:20:42.622766   13103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 12:20:42.622771   13103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 12:20:42.622777   13103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 12:20:42.622784   13103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 12:20:42.622791   13103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 12:20:42.622797   13103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 12:20:42.622806   13103 command_runner.go:130] > ExecStart=
	I0906 12:20:42.622822   13103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0906 12:20:42.622829   13103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 12:20:42.622836   13103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 12:20:42.622842   13103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 12:20:42.622845   13103 command_runner.go:130] > LimitNOFILE=infinity
	I0906 12:20:42.622850   13103 command_runner.go:130] > LimitNPROC=infinity
	I0906 12:20:42.622853   13103 command_runner.go:130] > LimitCORE=infinity
	I0906 12:20:42.622858   13103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 12:20:42.622862   13103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 12:20:42.622866   13103 command_runner.go:130] > TasksMax=infinity
	I0906 12:20:42.622882   13103 command_runner.go:130] > TimeoutStartSec=0
	I0906 12:20:42.622891   13103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 12:20:42.622895   13103 command_runner.go:130] > Delegate=yes
	I0906 12:20:42.622900   13103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 12:20:42.622904   13103 command_runner.go:130] > KillMode=process
	I0906 12:20:42.622908   13103 command_runner.go:130] > [Install]
	I0906 12:20:42.622922   13103 command_runner.go:130] > WantedBy=multi-user.target
	I0906 12:20:42.623046   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:20:42.635107   13103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:20:42.650265   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:20:42.660442   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:20:42.670557   13103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:20:42.687733   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:20:42.698135   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:20:42.712589   13103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 12:20:42.712891   13103 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:20:42.715807   13103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0906 12:20:42.715876   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:20:42.723104   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:20:42.736529   13103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:20:42.845157   13103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:20:42.954660   13103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:20:42.954733   13103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:20:42.970878   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:43.069021   13103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:20:45.394719   13103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.325687442s)
	I0906 12:20:45.394781   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 12:20:45.405825   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:20:45.415611   13103 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 12:20:45.518550   13103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 12:20:45.620332   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:45.730400   13103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 12:20:45.744586   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 12:20:45.756085   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:45.867521   13103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 12:20:45.926066   13103 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 12:20:45.926144   13103 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 12:20:45.930542   13103 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0906 12:20:45.930554   13103 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 12:20:45.930559   13103 command_runner.go:130] > Device: 0,22	Inode: 771         Links: 1
	I0906 12:20:45.930564   13103 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0906 12:20:45.930568   13103 command_runner.go:130] > Access: 2024-09-06 19:20:46.012191218 +0000
	I0906 12:20:45.930573   13103 command_runner.go:130] > Modify: 2024-09-06 19:20:46.012191218 +0000
	I0906 12:20:45.930577   13103 command_runner.go:130] > Change: 2024-09-06 19:20:46.014191220 +0000
	I0906 12:20:45.930581   13103 command_runner.go:130] >  Birth: -
	I0906 12:20:45.930604   13103 start.go:563] Will wait 60s for crictl version
	I0906 12:20:45.930645   13103 ssh_runner.go:195] Run: which crictl
	I0906 12:20:45.933399   13103 command_runner.go:130] > /usr/bin/crictl
	I0906 12:20:45.933622   13103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 12:20:45.962193   13103 command_runner.go:130] > Version:  0.1.0
	I0906 12:20:45.962207   13103 command_runner.go:130] > RuntimeName:  docker
	I0906 12:20:45.962210   13103 command_runner.go:130] > RuntimeVersion:  27.2.0
	I0906 12:20:45.962214   13103 command_runner.go:130] > RuntimeApiVersion:  v1
	I0906 12:20:45.963280   13103 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 12:20:45.963347   13103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:20:45.981353   13103 command_runner.go:130] > 27.2.0
	I0906 12:20:45.982262   13103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 12:20:45.999044   13103 command_runner.go:130] > 27.2.0
	I0906 12:20:46.023107   13103 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 12:20:46.023157   13103 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:20:46.023538   13103 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0906 12:20:46.028008   13103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:20:46.038612   13103 kubeadm.go:883] updating cluster {Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:defaul
t APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 12:20:46.038697   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:20:46.038752   13103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:20:46.051833   13103 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0906 12:20:46.051846   13103 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:20:46.051850   13103 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:20:46.051855   13103 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:20:46.051858   13103 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:20:46.051862   13103 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0906 12:20:46.051865   13103 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0906 12:20:46.051871   13103 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:20:46.051877   13103 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:20:46.051882   13103 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 12:20:46.051948   13103 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:20:46.051957   13103 docker.go:615] Images already preloaded, skipping extraction
	I0906 12:20:46.052037   13103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 12:20:46.064745   13103 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0906 12:20:46.064758   13103 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.0
	I0906 12:20:46.064762   13103 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 12:20:46.064766   13103 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.0
	I0906 12:20:46.064769   13103 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.0
	I0906 12:20:46.064773   13103 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0906 12:20:46.064776   13103 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0906 12:20:46.064787   13103 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0906 12:20:46.064792   13103 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 12:20:46.064796   13103 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0906 12:20:46.065514   13103 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0906 12:20:46.065534   13103 cache_images.go:84] Images are preloaded, skipping loading
	I0906 12:20:46.065544   13103 kubeadm.go:934] updating node { 192.169.0.33 8443 v1.31.0 docker true true} ...
	I0906 12:20:46.065620   13103 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-459000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 12:20:46.065684   13103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 12:20:46.101901   13103 command_runner.go:130] > cgroupfs
	I0906 12:20:46.102506   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:46.102517   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:46.102527   13103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 12:20:46.102543   13103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.33 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-459000 NodeName:multinode-459000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 12:20:46.102625   13103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-459000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 12:20:46.102686   13103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 12:20:46.111110   13103 command_runner.go:130] > kubeadm
	I0906 12:20:46.111117   13103 command_runner.go:130] > kubectl
	I0906 12:20:46.111120   13103 command_runner.go:130] > kubelet
	I0906 12:20:46.111230   13103 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 12:20:46.111277   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 12:20:46.119320   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0906 12:20:46.132438   13103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 12:20:46.146346   13103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0906 12:20:46.160046   13103 ssh_runner.go:195] Run: grep 192.169.0.33	control-plane.minikube.internal$ /etc/hosts
	I0906 12:20:46.162862   13103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 12:20:46.172928   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:46.273763   13103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:20:46.288239   13103 certs.go:68] Setting up /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000 for IP: 192.169.0.33
	I0906 12:20:46.288251   13103 certs.go:194] generating shared ca certs ...
	I0906 12:20:46.288261   13103 certs.go:226] acquiring lock for ca certs: {Name:mkbfbded896cc2be6cbfdef56fd391f1f98e6d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:46.288443   13103 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key
	I0906 12:20:46.288516   13103 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key
	I0906 12:20:46.288526   13103 certs.go:256] generating profile certs ...
	I0906 12:20:46.288635   13103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.key
	I0906 12:20:46.288722   13103 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key.154086e5
	I0906 12:20:46.288789   13103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key
	I0906 12:20:46.288802   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 12:20:46.288824   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 12:20:46.288840   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 12:20:46.288861   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 12:20:46.288878   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 12:20:46.288913   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 12:20:46.288942   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 12:20:46.288960   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 12:20:46.289058   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem (1338 bytes)
	W0906 12:20:46.289106   13103 certs.go:480] ignoring /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364_empty.pem, impossibly tiny 0 bytes
	I0906 12:20:46.289115   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 12:20:46.289188   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem (1078 bytes)
	I0906 12:20:46.289239   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem (1123 bytes)
	I0906 12:20:46.289279   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem (1675 bytes)
	I0906 12:20:46.289387   13103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:20:46.289437   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.289463   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem -> /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.289483   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.289983   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 12:20:46.323599   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 12:20:46.349693   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 12:20:46.380553   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 12:20:46.405494   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 12:20:46.425404   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 12:20:46.445154   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 12:20:46.464970   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 12:20:46.484693   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 12:20:46.504348   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/8364.pem --> /usr/share/ca-certificates/8364.pem (1338 bytes)
	I0906 12:20:46.523910   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /usr/share/ca-certificates/83642.pem (1708 bytes)
	I0906 12:20:46.543476   13103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 12:20:46.556852   13103 ssh_runner.go:195] Run: openssl version
	I0906 12:20:46.560972   13103 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0906 12:20:46.561024   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/83642.pem && ln -fs /usr/share/ca-certificates/83642.pem /etc/ssl/certs/83642.pem"
	I0906 12:20:46.569323   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572714   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572823   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:50 /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.572861   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/83642.pem
	I0906 12:20:46.576889   13103 command_runner.go:130] > 3ec20f2e
	I0906 12:20:46.577051   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/83642.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 12:20:46.585363   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 12:20:46.593723   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.596951   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.597034   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:29 /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.597071   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 12:20:46.601216   13103 command_runner.go:130] > b5213941
	I0906 12:20:46.601259   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 12:20:46.609583   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8364.pem && ln -fs /usr/share/ca-certificates/8364.pem /etc/ssl/certs/8364.pem"
	I0906 12:20:46.618022   13103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621405   13103 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621429   13103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:50 /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.621461   13103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8364.pem
	I0906 12:20:46.625579   13103 command_runner.go:130] > 51391683
	I0906 12:20:46.625701   13103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8364.pem /etc/ssl/certs/51391683.0"
	I0906 12:20:46.634117   13103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:20:46.637570   13103 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 12:20:46.637580   13103 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0906 12:20:46.637586   13103 command_runner.go:130] > Device: 253,1	Inode: 3148599     Links: 1
	I0906 12:20:46.637591   13103 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 12:20:46.637598   13103 command_runner.go:130] > Access: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637604   13103 command_runner.go:130] > Modify: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637608   13103 command_runner.go:130] > Change: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637612   13103 command_runner.go:130] >  Birth: 2024-09-06 19:16:43.457303604 +0000
	I0906 12:20:46.637725   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 12:20:46.642003   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.642072   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 12:20:46.646243   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.646295   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 12:20:46.650659   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.650716   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 12:20:46.654983   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.655072   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 12:20:46.659282   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.659324   13103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 12:20:46.663431   13103 command_runner.go:130] > Certificate will not expire
	I0906 12:20:46.663587   13103 kubeadm.go:392] StartCluster: {Name:multinode-459000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-459000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.34 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.35 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provis
ioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 12:20:46.663700   13103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:20:46.680120   13103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 12:20:46.687982   13103 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0906 12:20:46.687996   13103 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0906 12:20:46.688003   13103 command_runner.go:130] > /var/lib/minikube/etcd:
	I0906 12:20:46.688008   13103 command_runner.go:130] > member
	I0906 12:20:46.688054   13103 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 12:20:46.688064   13103 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 12:20:46.688107   13103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 12:20:46.695454   13103 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:20:46.695768   13103 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-459000" does not appear in /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:46.695853   13103 kubeconfig.go:62] /Users/jenkins/minikube-integration/19576-7784/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-459000" cluster setting kubeconfig missing "multinode-459000" context setting]
	I0906 12:20:46.696079   13103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:46.696780   13103 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:46.696975   13103 kapi.go:59] client config for multinode-459000: &rest.Config{Host:"https://192.169.0.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/client.key", CAFile:"/Users/jenkins/minikube-integration/19576-7784/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa883ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 12:20:46.697305   13103 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 12:20:46.697478   13103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 12:20:46.704887   13103 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.33
	I0906 12:20:46.704905   13103 kubeadm.go:1160] stopping kube-system containers ...
	I0906 12:20:46.704959   13103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 12:20:46.723369   13103 command_runner.go:130] > 12b00d3e81cd
	I0906 12:20:46.723381   13103 command_runner.go:130] > b8675b45ba97
	I0906 12:20:46.723384   13103 command_runner.go:130] > 0516c7173c76
	I0906 12:20:46.723387   13103 command_runner.go:130] > 6766a97ec06f
	I0906 12:20:46.723391   13103 command_runner.go:130] > b2cede164434
	I0906 12:20:46.723394   13103 command_runner.go:130] > e4605e60128b
	I0906 12:20:46.723411   13103 command_runner.go:130] > 98079ff18be9
	I0906 12:20:46.723418   13103 command_runner.go:130] > 68811f115b6f
	I0906 12:20:46.723422   13103 command_runner.go:130] > 7158af8be341
	I0906 12:20:46.723426   13103 command_runner.go:130] > fde17951087f
	I0906 12:20:46.723432   13103 command_runner.go:130] > 487be703273e
	I0906 12:20:46.723435   13103 command_runner.go:130] > 95c1a9b114b1
	I0906 12:20:46.723445   13103 command_runner.go:130] > 03508ab110f1
	I0906 12:20:46.723449   13103 command_runner.go:130] > 8b8fefcb9e0b
	I0906 12:20:46.723452   13103 command_runner.go:130] > 6f313c531f3e
	I0906 12:20:46.723455   13103 command_runner.go:130] > 8455632502ed
	I0906 12:20:46.724125   13103 docker.go:483] Stopping containers: [12b00d3e81cd b8675b45ba97 0516c7173c76 6766a97ec06f b2cede164434 e4605e60128b 98079ff18be9 68811f115b6f 7158af8be341 fde17951087f 487be703273e 95c1a9b114b1 03508ab110f1 8b8fefcb9e0b 6f313c531f3e 8455632502ed]
	I0906 12:20:46.724190   13103 ssh_runner.go:195] Run: docker stop 12b00d3e81cd b8675b45ba97 0516c7173c76 6766a97ec06f b2cede164434 e4605e60128b 98079ff18be9 68811f115b6f 7158af8be341 fde17951087f 487be703273e 95c1a9b114b1 03508ab110f1 8b8fefcb9e0b 6f313c531f3e 8455632502ed
	I0906 12:20:46.738443   13103 command_runner.go:130] > 12b00d3e81cd
	I0906 12:20:46.738474   13103 command_runner.go:130] > b8675b45ba97
	I0906 12:20:46.738657   13103 command_runner.go:130] > 0516c7173c76
	I0906 12:20:46.738757   13103 command_runner.go:130] > 6766a97ec06f
	I0906 12:20:46.738837   13103 command_runner.go:130] > b2cede164434
	I0906 12:20:46.738974   13103 command_runner.go:130] > e4605e60128b
	I0906 12:20:46.739000   13103 command_runner.go:130] > 98079ff18be9
	I0906 12:20:46.739061   13103 command_runner.go:130] > 68811f115b6f
	I0906 12:20:46.739156   13103 command_runner.go:130] > 7158af8be341
	I0906 12:20:46.739263   13103 command_runner.go:130] > fde17951087f
	I0906 12:20:46.739379   13103 command_runner.go:130] > 487be703273e
	I0906 12:20:46.739467   13103 command_runner.go:130] > 95c1a9b114b1
	I0906 12:20:46.739588   13103 command_runner.go:130] > 03508ab110f1
	I0906 12:20:46.739640   13103 command_runner.go:130] > 8b8fefcb9e0b
	I0906 12:20:46.739757   13103 command_runner.go:130] > 6f313c531f3e
	I0906 12:20:46.739869   13103 command_runner.go:130] > 8455632502ed
	I0906 12:20:46.740823   13103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 12:20:46.753311   13103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 12:20:46.762059   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0906 12:20:46.762071   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0906 12:20:46.762077   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0906 12:20:46.762083   13103 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:20:46.762204   13103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 12:20:46.762210   13103 kubeadm.go:157] found existing configuration files:
	
	I0906 12:20:46.762252   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 12:20:46.769254   13103 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:20:46.769280   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 12:20:46.769328   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 12:20:46.776572   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 12:20:46.783758   13103 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:20:46.783776   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 12:20:46.783811   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 12:20:46.791113   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 12:20:46.798161   13103 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:20:46.798183   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 12:20:46.798220   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 12:20:46.805713   13103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 12:20:46.812921   13103 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:20:46.812949   13103 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 12:20:46.812990   13103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 12:20:46.820390   13103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 12:20:46.827763   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:46.898290   13103 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 12:20:46.898453   13103 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0906 12:20:46.898625   13103 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0906 12:20:46.898765   13103 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 12:20:46.898960   13103 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0906 12:20:46.899098   13103 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0906 12:20:46.899397   13103 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0906 12:20:46.899561   13103 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0906 12:20:46.899681   13103 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0906 12:20:46.899817   13103 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 12:20:46.899989   13103 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 12:20:46.900143   13103 command_runner.go:130] > [certs] Using the existing "sa" key
	I0906 12:20:46.900985   13103 command_runner.go:130] ! W0906 19:20:47.031470    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:46.901004   13103 command_runner.go:130] ! W0906 19:20:47.032174    1325 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:46.901041   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:46.935711   13103 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 12:20:47.096680   13103 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 12:20:47.204439   13103 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 12:20:47.365845   13103 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 12:20:47.451527   13103 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 12:20:47.525150   13103 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 12:20:47.527254   13103 command_runner.go:130] ! W0906 19:20:47.069183    1330 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.527272   13103 command_runner.go:130] ! W0906 19:20:47.069676    1330 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.527286   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.576279   13103 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 12:20:47.581148   13103 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 12:20:47.581159   13103 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0906 12:20:47.689821   13103 command_runner.go:130] ! W0906 19:20:47.697610    1335 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.689851   13103 command_runner.go:130] ! W0906 19:20:47.698106    1335 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.689868   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.746190   13103 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 12:20:47.746600   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 12:20:47.748596   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 12:20:47.749246   13103 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 12:20:47.750702   13103 command_runner.go:130] ! W0906 19:20:47.870242    1362 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.750732   13103 command_runner.go:130] ! W0906 19:20:47.871098    1362 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.750753   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:47.814153   13103 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 12:20:47.826523   13103 command_runner.go:130] ! W0906 19:20:47.947508    1370 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.826546   13103 command_runner.go:130] ! W0906 19:20:47.947979    1370 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:47.826615   13103 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:20:47.826675   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.327215   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.827064   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:20:48.840074   13103 command_runner.go:130] > 1692
	I0906 12:20:48.840096   13103 api_server.go:72] duration metric: took 1.013496031s to wait for apiserver process to appear ...
	I0906 12:20:48.840102   13103 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:20:48.840118   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.026473   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:20:51.026490   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:20:51.026497   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.054937   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 12:20:51.054956   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 12:20:51.341860   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.346791   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 12:20:51.346809   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 12:20:51.841712   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:51.847377   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 12:20:51.847398   13103 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 12:20:52.341716   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:20:52.345528   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:20:52.345592   13103 round_trippers.go:463] GET https://192.169.0.33:8443/version
	I0906 12:20:52.345598   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:52.345606   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:52.345609   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:52.352319   13103 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 12:20:52.352332   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:52.352337   13103 round_trippers.go:580]     Audit-Id: 5ffc807c-a78c-402c-87d3-b9b415b40e5f
	I0906 12:20:52.352340   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:52.352350   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:52.352354   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:52.352356   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:52.352359   13103 round_trippers.go:580]     Content-Length: 263
	I0906 12:20:52.352363   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:52 GMT
	I0906 12:20:52.352382   13103 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 12:20:52.352432   13103 api_server.go:141] control plane version: v1.31.0
	I0906 12:20:52.352443   13103 api_server.go:131] duration metric: took 3.512352698s to wait for apiserver health ...
	I0906 12:20:52.352449   13103 cni.go:84] Creating CNI manager for ""
	I0906 12:20:52.352452   13103 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 12:20:52.374855   13103 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 12:20:52.395566   13103 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 12:20:52.402927   13103 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0906 12:20:52.402941   13103 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0906 12:20:52.402950   13103 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0906 12:20:52.402955   13103 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 12:20:52.402959   13103 command_runner.go:130] > Access: 2024-09-06 19:20:14.852309625 +0000
	I0906 12:20:52.402966   13103 command_runner.go:130] > Modify: 2024-09-03 22:42:55.000000000 +0000
	I0906 12:20:52.402971   13103 command_runner.go:130] > Change: 2024-09-06 19:20:13.268309735 +0000
	I0906 12:20:52.402978   13103 command_runner.go:130] >  Birth: -
	I0906 12:20:52.405546   13103 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0906 12:20:52.405555   13103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0906 12:20:52.439971   13103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 12:20:52.805772   13103 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0906 12:20:52.854248   13103 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0906 12:20:52.933352   13103 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0906 12:20:53.005604   13103 command_runner.go:130] > daemonset.apps/kindnet configured
	I0906 12:20:53.007357   13103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:20:53.007404   13103 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 12:20:53.007414   13103 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 12:20:53.007474   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:20:53.007480   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.007486   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.007490   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.009554   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.009563   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.009569   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.009572   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.009575   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.009579   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.009591   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.009594   13103 round_trippers.go:580]     Audit-Id: 55484294-9cbd-46c9-bee1-1b642c12b69d
	I0906 12:20:53.010487   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"849"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0906 12:20:53.013723   13103 system_pods.go:59] 12 kube-system pods found
	I0906 12:20:53.013738   13103 system_pods.go:61] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 12:20:53.013744   13103 system_pods.go:61] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 12:20:53.013748   13103 system_pods.go:61] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:20:53.013756   13103 system_pods.go:61] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:20:53.013760   13103 system_pods.go:61] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:20:53.013767   13103 system_pods.go:61] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 12:20:53.013771   13103 system_pods.go:61] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 12:20:53.013776   13103 system_pods.go:61] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:20:53.013780   13103 system_pods.go:61] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:20:53.013783   13103 system_pods.go:61] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:20:53.013786   13103 system_pods.go:61] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 12:20:53.013790   13103 system_pods.go:61] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running
	I0906 12:20:53.013794   13103 system_pods.go:74] duration metric: took 6.429185ms to wait for pod list to return data ...
	I0906 12:20:53.013800   13103 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:20:53.013833   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes
	I0906 12:20:53.013837   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.013843   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.013846   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.015478   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.015502   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.015511   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.015514   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.015517   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.015520   13103 round_trippers.go:580]     Audit-Id: 30570eec-545b-4745-8743-a1cab2a3fb29
	I0906 12:20:53.015523   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.015525   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.015644   13103 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"849"},"items":[{"metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14782 chars]
	I0906 12:20:53.016196   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016209   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016218   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016221   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016225   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:20:53.016229   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:20:53.016233   13103 node_conditions.go:105] duration metric: took 2.429093ms to run NodePressure ...
	I0906 12:20:53.016243   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 12:20:53.160252   13103 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0906 12:20:53.282226   13103 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0906 12:20:53.283414   13103 command_runner.go:130] ! W0906 19:20:53.201637    2133 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:53.283436   13103 command_runner.go:130] ! W0906 19:20:53.202191    2133 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 12:20:53.283454   13103 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 12:20:53.283521   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0906 12:20:53.283525   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.283530   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.283534   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.285658   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.285667   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.285674   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.285678   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.285683   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.285688   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.285692   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.285695   13103 round_trippers.go:580]     Audit-Id: dfd8d4ba-250d-43fd-a3c9-7094cfa9b329
	I0906 12:20:53.286150   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"820","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations"
:{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 31218 chars]
	I0906 12:20:53.286880   13103 kubeadm.go:739] kubelet initialised
	I0906 12:20:53.286889   13103 kubeadm.go:740] duration metric: took 3.428745ms waiting for restarted kubelet to initialise ...
	I0906 12:20:53.286897   13103 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:20:53.286928   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:20:53.286933   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.286939   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.286944   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.289064   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.289072   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.289076   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.289080   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.289082   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.289085   13103 round_trippers.go:580]     Audit-Id: f185bee8-cf54-428e-9251-f89670109af4
	I0906 12:20:53.289088   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.289091   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.290451   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"851"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89937 chars]
	I0906 12:20:53.293407   13103 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.293459   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:20:53.293464   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.293470   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.293475   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.295326   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.295335   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.295339   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.295342   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.295345   13103 round_trippers.go:580]     Audit-Id: 83fb4e68-22fb-4080-a855-59e8a5c87034
	I0906 12:20:53.295348   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.295350   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.295353   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.295454   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:20:53.295719   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.295727   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.295733   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.295737   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.297662   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.297677   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.297685   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.297688   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.297691   13103 round_trippers.go:580]     Audit-Id: e5df61ad-c106-47e5-bbc3-4070002c5b9e
	I0906 12:20:53.297694   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.297697   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.297699   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.297927   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.298135   13103 pod_ready.go:98] node "multinode-459000" hosting pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.298146   13103 pod_ready.go:82] duration metric: took 4.727596ms for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.298153   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.298161   13103 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.298194   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-459000
	I0906 12:20:53.298199   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.298205   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.298209   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.299621   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.299629   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.299635   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.299638   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.299642   13103 round_trippers.go:580]     Audit-Id: be77759d-114f-4c80-a5d1-184591aa7427
	I0906 12:20:53.299645   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.299648   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.299650   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.299898   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"820","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6887 chars]
	I0906 12:20:53.300165   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.300172   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.300178   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.300181   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.302558   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.302567   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.302573   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.302576   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.302579   13103 round_trippers.go:580]     Audit-Id: 978a43f1-4d45-4094-ad01-bc549f492e2e
	I0906 12:20:53.302582   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.302586   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.302589   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.302801   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.302977   13103 pod_ready.go:98] node "multinode-459000" hosting pod "etcd-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.302989   13103 pod_ready.go:82] duration metric: took 4.821114ms for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.302995   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "etcd-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.303006   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.303035   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-459000
	I0906 12:20:53.303040   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.303045   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.303049   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.304725   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.304734   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.304739   13103 round_trippers.go:580]     Audit-Id: 744b1630-f218-49d7-bf9e-0874b8ae067c
	I0906 12:20:53.304749   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.304757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.304762   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.304765   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.304768   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.305009   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-459000","namespace":"kube-system","uid":"a7ee0531-75a6-405c-928c-1185a0e5ebd0","resourceVersion":"817","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.33:8443","kubernetes.io/config.hash":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.mirror":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.seen":"2024-09-06T19:16:52.157527221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8135 chars]
	I0906 12:20:53.305246   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.305252   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.305260   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.305264   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.306599   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.306606   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.306611   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.306614   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.306617   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.306621   13103 round_trippers.go:580]     Audit-Id: 9726eabf-d52a-40b0-a363-c7385d06aab6
	I0906 12:20:53.306623   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.306625   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.306860   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.307038   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-apiserver-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.307048   13103 pod_ready.go:82] duration metric: took 4.037219ms for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.307054   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-apiserver-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.307059   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.307089   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-459000
	I0906 12:20:53.307094   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.307099   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.307103   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.308747   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.308756   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.308763   13103 round_trippers.go:580]     Audit-Id: 9a50b907-1158-4251-97c3-8744af1d441b
	I0906 12:20:53.308797   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.308802   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.308806   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.308810   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.308812   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.308934   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-459000","namespace":"kube-system","uid":"ef9a4034-636f-4d52-b328-40aff0e03ccb","resourceVersion":"818","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.mirror":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.seen":"2024-09-06T19:16:52.157528036Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7726 chars]
	I0906 12:20:53.409587   13103 request.go:632] Waited for 100.344038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.409636   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:53.409642   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.409649   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.409678   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.411918   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:53.411930   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.411938   13103 round_trippers.go:580]     Audit-Id: 36f8d9a0-08c1-4900-a883-c98118ddb954
	I0906 12:20:53.411943   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.411948   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.411951   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.411976   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.411984   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.412084   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:53.412281   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-controller-manager-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.412293   13103 pod_ready.go:82] duration metric: took 105.228203ms for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:53.412300   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-controller-manager-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:53.412305   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.609009   13103 request.go:632] Waited for 196.662551ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:20:53.609093   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:20:53.609102   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.609109   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.609117   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.610900   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.610911   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.610918   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.610924   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.610933   13103 round_trippers.go:580]     Audit-Id: 8f5d6aad-4ab1-48b5-889e-18c35f8c2f26
	I0906 12:20:53.610936   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.610940   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.610944   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.611070   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crzpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"253c78d8-0d56-49e8-a00c-99218c50beac","resourceVersion":"505","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:20:53.809000   13103 request.go:632] Waited for 197.654908ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:20:53.809067   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:20:53.809076   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:53.809084   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:53.809090   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:53.810657   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:53.810685   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:53.810691   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:53.810694   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:53.810697   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:53.810700   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:53.810704   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:53 GMT
	I0906 12:20:53.810706   13103 round_trippers.go:580]     Audit-Id: 5e585264-4859-4285-aeec-7287183c8596
	I0906 12:20:53.810806   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m02","uid":"42483c05-2f0a-48b5-a783-4c5958284f86","resourceVersion":"573","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_17_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3818 chars]
	I0906 12:20:53.810982   13103 pod_ready.go:93] pod "kube-proxy-crzpl" in "kube-system" namespace has status "Ready":"True"
	I0906 12:20:53.810990   13103 pod_ready.go:82] duration metric: took 398.681997ms for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:53.810997   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.009014   13103 request.go:632] Waited for 197.982629ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:20:54.009087   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:20:54.009094   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.009120   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.009127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.010937   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.010949   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.010956   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.010962   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.010969   13103 round_trippers.go:580]     Audit-Id: 16e1e167-aa04-4560-aac6-3565f9b98f3d
	I0906 12:20:54.010975   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.010978   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.010980   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.011063   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t24bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"626397be-3b5a-4dd4-8932-283e8edb0d27","resourceVersion":"849","creationTimestamp":"2024-09-06T19:16:56Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0906 12:20:54.209036   13103 request.go:632] Waited for 197.706507ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:54.209076   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:54.209082   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.209116   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.209123   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.210677   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.210689   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.210699   13103 round_trippers.go:580]     Audit-Id: 84f2f37b-1511-4669-aa06-cc83e829c4c3
	I0906 12:20:54.210707   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.210716   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.210722   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.210730   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.210734   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.210986   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"776","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5292 chars]
	I0906 12:20:54.211173   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-proxy-t24bs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:54.211182   13103 pod_ready.go:82] duration metric: took 400.183556ms for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:54.211191   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-proxy-t24bs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:54.211199   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.409019   13103 request.go:632] Waited for 197.785012ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:20:54.409077   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:20:54.409083   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.409089   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.409093   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.410708   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.410718   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.410723   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.410726   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.410729   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.410733   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.410735   13103 round_trippers.go:580]     Audit-Id: 9bdf799c-01ec-497a-9877-acc5ee1c1400
	I0906 12:20:54.410738   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.410823   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vqcpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6","resourceVersion":"735","creationTimestamp":"2024-09-06T19:18:30Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:20:54.607514   13103 request.go:632] Waited for 196.41514ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:20:54.607567   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:20:54.607574   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.607581   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.607587   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.609573   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:54.609582   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.609587   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.609598   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.609601   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.609604   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.609606   13103 round_trippers.go:580]     Audit-Id: bc2698b5-26ee-4b75-8329-688459bdcba8
	I0906 12:20:54.609613   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.609723   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m03","uid":"6c54d256-cf96-4ec0-9d0b-36c85c77ef2b","resourceVersion":"760","creationTimestamp":"2024-09-06T19:19:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_19_25_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:19:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3635 chars]
	I0906 12:20:54.609895   13103 pod_ready.go:93] pod "kube-proxy-vqcpj" in "kube-system" namespace has status "Ready":"True"
	I0906 12:20:54.609903   13103 pod_ready.go:82] duration metric: took 398.702285ms for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.609909   13103 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:20:54.809054   13103 request.go:632] Waited for 199.102039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:20:54.809116   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:20:54.809123   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:54.809130   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:54.809135   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:54.811199   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:54.811208   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:54.811213   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:54.811217   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:54 GMT
	I0906 12:20:54.811220   13103 round_trippers.go:580]     Audit-Id: 14d96cdd-752b-4f32-81b5-946d2a4fb9c9
	I0906 12:20:54.811222   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:54.811226   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:54.811232   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:54.811498   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-459000","namespace":"kube-system","uid":"4602221a-c2e8-4f7d-a31e-2910196cb32b","resourceVersion":"819","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.mirror":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.seen":"2024-09-06T19:16:46.929338017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5438 chars]
	I0906 12:20:55.009522   13103 request.go:632] Waited for 197.762294ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.009571   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.009578   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.009584   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.009588   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.011014   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:55.011021   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.011025   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.011031   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.011033   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.011038   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.011041   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.011044   13103 round_trippers.go:580]     Audit-Id: 74dc264e-7739-4a48-972c-506fbb05ade8
	I0906 12:20:55.011131   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:55.011329   13103 pod_ready.go:98] node "multinode-459000" hosting pod "kube-scheduler-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:55.011339   13103 pod_ready.go:82] duration metric: took 401.42623ms for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	E0906 12:20:55.011345   13103 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-459000" hosting pod "kube-scheduler-multinode-459000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-459000" has status "Ready":"False"
	I0906 12:20:55.011353   13103 pod_ready.go:39] duration metric: took 1.724456804s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:20:55.011367   13103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 12:20:55.022277   13103 command_runner.go:130] > -16
	I0906 12:20:55.022455   13103 ops.go:34] apiserver oom_adj: -16
	I0906 12:20:55.022461   13103 kubeadm.go:597] duration metric: took 8.334425046s to restartPrimaryControlPlane
	I0906 12:20:55.022467   13103 kubeadm.go:394] duration metric: took 8.358925932s to StartCluster
	I0906 12:20:55.022482   13103 settings.go:142] acquiring lock: {Name:mk62b5c013dd2b38ebc53f6ae9cd315d30aadad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:55.022574   13103 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 12:20:55.022988   13103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/kubeconfig: {Name:mk47fb3d49c6ce5c49cc6aa85e06648b92fedba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 12:20:55.023242   13103 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 12:20:55.023269   13103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 12:20:55.023397   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:20:55.046055   13103 out.go:177] * Verifying Kubernetes components...
	I0906 12:20:55.088345   13103 out.go:177] * Enabled addons: 
	I0906 12:20:55.109104   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:20:55.130229   13103 addons.go:510] duration metric: took 106.968501ms for enable addons: enabled=[]
	I0906 12:20:55.271679   13103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 12:20:55.282375   13103 node_ready.go:35] waiting up to 6m0s for node "multinode-459000" to be "Ready" ...
	I0906 12:20:55.282438   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.282444   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.282450   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.282453   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.283922   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:20:55.283933   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.283939   13103 round_trippers.go:580]     Audit-Id: e487ae5e-005a-48d5-b58f-3d58f014af16
	I0906 12:20:55.283945   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.283948   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.283952   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.283955   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.283957   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.284279   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:55.784190   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:55.784216   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:55.784227   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:55.784232   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:55.787081   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:55.787097   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:55.787104   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:55.787116   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:55.787120   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:55.787124   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:55 GMT
	I0906 12:20:55.787128   13103 round_trippers.go:580]     Audit-Id: 613bdd38-a63c-46c4-ad1d-e23b6b4ead50
	I0906 12:20:55.787132   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:55.787225   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:56.283042   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:56.283069   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:56.283081   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:56.283086   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:56.285909   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:56.285924   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:56.285931   13103 round_trippers.go:580]     Audit-Id: 336e04a7-5b77-468d-a980-45a2482d9f8c
	I0906 12:20:56.285935   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:56.285938   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:56.285942   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:56.285946   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:56.285949   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:56 GMT
	I0906 12:20:56.286021   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:56.783350   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:56.783375   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:56.783387   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:56.783394   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:56.786321   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:56.786335   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:56.786342   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:56 GMT
	I0906 12:20:56.786346   13103 round_trippers.go:580]     Audit-Id: 4d25f0ac-c3f5-4f16-98b8-45432f07e35c
	I0906 12:20:56.786350   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:56.786354   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:56.786358   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:56.786361   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:56.786856   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:57.282948   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:57.282975   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:57.282986   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:57.282992   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:57.285671   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:57.285684   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:57.285691   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:57 GMT
	I0906 12:20:57.285695   13103 round_trippers.go:580]     Audit-Id: e05176ca-d4e5-4302-8520-49057bbbad74
	I0906 12:20:57.285699   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:57.285703   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:57.285720   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:57.285733   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:57.285862   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:57.286129   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:20:57.784635   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:57.784663   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:57.784701   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:57.784710   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:57.787321   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:57.787336   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:57.787343   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:57.787348   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:57 GMT
	I0906 12:20:57.787353   13103 round_trippers.go:580]     Audit-Id: 22a92049-2ac6-4f14-a36b-43fdd32ce11f
	I0906 12:20:57.787357   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:57.787363   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:57.787366   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:57.787656   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:58.282909   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:58.282936   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:58.282951   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:58.282957   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:58.285758   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:58.285775   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:58.285783   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:58 GMT
	I0906 12:20:58.285789   13103 round_trippers.go:580]     Audit-Id: d296704e-2819-42cc-ba15-d8774b071678
	I0906 12:20:58.285795   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:58.285801   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:58.285806   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:58.285811   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:58.285911   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:58.782638   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:58.782660   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:58.782696   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:58.782704   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:58.784836   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:58.784849   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:58.784856   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:58 GMT
	I0906 12:20:58.784862   13103 round_trippers.go:580]     Audit-Id: 7471c8ca-95e1-4e27-b818-6a3ee6a94f84
	I0906 12:20:58.784867   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:58.784873   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:58.784875   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:58.784878   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:58.784952   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.283284   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:59.283306   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:59.283315   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:59.283324   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:59.285640   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:59.285651   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:59.285657   13103 round_trippers.go:580]     Audit-Id: b32b984e-803b-45c6-a485-2f6621da8200
	I0906 12:20:59.285659   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:59.285663   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:59.285665   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:59.285669   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:59.285672   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:59 GMT
	I0906 12:20:59.285774   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.783737   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:20:59.783761   13103 round_trippers.go:469] Request Headers:
	I0906 12:20:59.783773   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:20:59.783780   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:20:59.786325   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:20:59.786343   13103 round_trippers.go:577] Response Headers:
	I0906 12:20:59.786351   13103 round_trippers.go:580]     Audit-Id: 9216e10e-3b70-4a91-9a52-a8a339880eb8
	I0906 12:20:59.786357   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:20:59.786360   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:20:59.786364   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:20:59.786367   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:20:59.786374   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:20:59 GMT
	I0906 12:20:59.786769   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:20:59.787020   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:00.283378   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:00.283465   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:00.283478   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:00.283485   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:00.285634   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:00.285646   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:00.285651   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:00.285654   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:00.285661   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:00.285663   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:00.285683   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:00 GMT
	I0906 12:21:00.285688   13103 round_trippers.go:580]     Audit-Id: b74fdec6-ab72-46ec-970e-11133a30eb49
	I0906 12:21:00.285749   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:00.782855   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:00.782871   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:00.782877   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:00.782880   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:00.785063   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:00.785077   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:00.785083   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:00 GMT
	I0906 12:21:00.785086   13103 round_trippers.go:580]     Audit-Id: b8b54770-654c-47ac-bb70-f47239d9a85f
	I0906 12:21:00.785090   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:00.785094   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:00.785097   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:00.785100   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:00.785269   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.283867   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:01.283894   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:01.283904   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:01.283910   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:01.286375   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:01.286388   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:01.286397   13103 round_trippers.go:580]     Audit-Id: a8e07055-17c9-44ef-a99d-9029a0fff2ce
	I0906 12:21:01.286401   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:01.286433   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:01.286441   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:01.286445   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:01.286450   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:01 GMT
	I0906 12:21:01.286643   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.784066   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:01.784089   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:01.784101   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:01.784110   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:01.786790   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:01.786802   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:01.786808   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:01.786810   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:01 GMT
	I0906 12:21:01.786818   13103 round_trippers.go:580]     Audit-Id: 709cfb3e-a937-4f70-b01f-a375a7ecd6d2
	I0906 12:21:01.786822   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:01.786824   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:01.786827   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:01.787030   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:01.787224   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:02.283110   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:02.283218   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:02.283234   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:02.283241   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:02.285929   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:02.285942   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:02.285947   13103 round_trippers.go:580]     Audit-Id: 23e67746-8645-42b6-b246-9ea7bad09da7
	I0906 12:21:02.285950   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:02.285952   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:02.285954   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:02.285957   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:02.285980   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:02 GMT
	I0906 12:21:02.286063   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:02.784562   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:02.784589   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:02.784601   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:02.784607   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:02.787179   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:02.787191   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:02.787196   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:02.787199   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:02 GMT
	I0906 12:21:02.787202   13103 round_trippers.go:580]     Audit-Id: 3138c6d4-06dc-4784-ad86-3d2bf39d9d18
	I0906 12:21:02.787204   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:02.787207   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:02.787210   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:02.787360   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:03.282839   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:03.282867   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:03.282879   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:03.282887   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:03.285832   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:03.285850   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:03.285857   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:03.285865   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:03.285869   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:03.285874   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:03 GMT
	I0906 12:21:03.285878   13103 round_trippers.go:580]     Audit-Id: c405913f-9342-44dc-931f-f8414fcdd19e
	I0906 12:21:03.285882   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:03.285942   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:03.782685   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:03.782706   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:03.782716   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:03.782721   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:03.785444   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:03.785456   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:03.785462   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:03.785465   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:03.785468   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:03.785471   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:03 GMT
	I0906 12:21:03.785473   13103 round_trippers.go:580]     Audit-Id: 5a0d7dbe-5224-44ee-a0df-2ba863732ca1
	I0906 12:21:03.785477   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:03.785734   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:04.282619   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:04.282642   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:04.282654   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:04.282662   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:04.285440   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:04.285454   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:04.285462   13103 round_trippers.go:580]     Audit-Id: 7fa3551b-6c18-4c05-a1f9-feedce2df755
	I0906 12:21:04.285465   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:04.285468   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:04.285472   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:04.285476   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:04.285479   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:04 GMT
	I0906 12:21:04.285554   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:04.285813   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:04.783450   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:04.783471   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:04.783483   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:04.783492   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:04.786538   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:04.786553   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:04.786566   13103 round_trippers.go:580]     Audit-Id: e6e70310-56ea-4d9b-9dfb-50f1853d1c43
	I0906 12:21:04.786572   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:04.786578   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:04.786582   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:04.786587   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:04.786592   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:04 GMT
	I0906 12:21:04.786801   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:05.282653   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:05.282671   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:05.282680   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:05.282687   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:05.285490   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:05.285505   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:05.285512   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:05.285517   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:05.285521   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:05.285526   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:05.285530   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:05 GMT
	I0906 12:21:05.285534   13103 round_trippers.go:580]     Audit-Id: 78254ef8-b353-4eda-8274-53fea1e71827
	I0906 12:21:05.285829   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:05.783384   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:05.783407   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:05.783417   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:05.783422   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:05.786324   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:05.786338   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:05.786346   13103 round_trippers.go:580]     Audit-Id: 7cadcf56-0277-4ba8-b4c6-6b99b793cc5a
	I0906 12:21:05.786350   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:05.786353   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:05.786358   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:05.786362   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:05.786367   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:05 GMT
	I0906 12:21:05.786462   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:06.283633   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:06.283652   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:06.283660   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:06.283665   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:06.286164   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:06.286176   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:06.286181   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:06.286184   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:06 GMT
	I0906 12:21:06.286193   13103 round_trippers.go:580]     Audit-Id: 07623995-247a-4533-b371-d74f13933cf9
	I0906 12:21:06.286197   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:06.286200   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:06.286203   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:06.286261   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:06.286456   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:06.784394   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:06.784416   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:06.784425   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:06.784430   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:06.786848   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:06.786862   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:06.786874   13103 round_trippers.go:580]     Audit-Id: ebdeab18-9907-4e9c-b0af-049ddea0dffa
	I0906 12:21:06.786882   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:06.786890   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:06.786898   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:06.786905   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:06.786910   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:06 GMT
	I0906 12:21:06.787176   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:07.283258   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:07.283286   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:07.283298   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:07.283303   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:07.286283   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:07.286298   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:07.286304   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:07 GMT
	I0906 12:21:07.286309   13103 round_trippers.go:580]     Audit-Id: 198ec8db-095b-4749-936f-50fdaebba154
	I0906 12:21:07.286313   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:07.286318   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:07.286322   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:07.286325   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:07.286389   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:07.782722   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:07.782750   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:07.782762   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:07.782772   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:07.787689   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:07.787701   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:07.787706   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:07.787709   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:07 GMT
	I0906 12:21:07.787712   13103 round_trippers.go:580]     Audit-Id: 43ab80c6-cadf-474a-a628-290349ba4713
	I0906 12:21:07.787733   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:07.787739   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:07.787742   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:07.788169   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:08.284167   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:08.284228   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:08.284239   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:08.284244   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:08.286632   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:08.286645   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:08.286651   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:08.286655   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:08.286658   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:08.286661   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:08 GMT
	I0906 12:21:08.286664   13103 round_trippers.go:580]     Audit-Id: 33efcd67-9d4d-4b18-9e85-046bf5c121f5
	I0906 12:21:08.286666   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:08.286715   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:08.286913   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:08.782461   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:08.782499   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:08.782508   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:08.782513   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:08.784535   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:08.784551   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:08.784563   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:08 GMT
	I0906 12:21:08.784568   13103 round_trippers.go:580]     Audit-Id: 52e2b62a-8c0c-4d1c-8f26-db12cc5752c5
	I0906 12:21:08.784571   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:08.784574   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:08.784578   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:08.784581   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:08.784653   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:09.283882   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:09.283906   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:09.283917   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:09.283925   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:09.286310   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:09.286322   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:09.286330   13103 round_trippers.go:580]     Audit-Id: f8e97681-9a81-4185-96fe-451b96e23c20
	I0906 12:21:09.286333   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:09.286337   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:09.286340   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:09.286364   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:09.286375   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:09 GMT
	I0906 12:21:09.286444   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:09.784111   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:09.784128   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:09.784136   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:09.784153   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:09.785974   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:09.785984   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:09.785994   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:09.786000   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:09.786004   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:09.786008   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:09 GMT
	I0906 12:21:09.786012   13103 round_trippers.go:580]     Audit-Id: 95b4ab98-eaf7-47a9-93be-d4364de7462c
	I0906 12:21:09.786014   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:09.786405   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:10.283812   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:10.283844   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:10.283887   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:10.283896   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:10.286832   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:10.286844   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:10.286850   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:10.286854   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:10 GMT
	I0906 12:21:10.286859   13103 round_trippers.go:580]     Audit-Id: c137c5aa-ce53-40a1-8e3f-d5c95e35f70b
	I0906 12:21:10.286863   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:10.286868   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:10.286872   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:10.287129   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:10.287326   13103 node_ready.go:53] node "multinode-459000" has status "Ready":"False"
	I0906 12:21:10.782721   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:10.782741   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:10.782751   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:10.782764   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:10.785658   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:10.785670   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:10.785687   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:10.785692   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:10.785696   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:10.785700   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:10 GMT
	I0906 12:21:10.785702   13103 round_trippers.go:580]     Audit-Id: 26d17310-f8a6-4ca9-96e2-32b23e99741c
	I0906 12:21:10.785705   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:10.785807   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:11.284555   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.284583   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.284595   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.284603   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.287262   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:11.287276   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.287283   13103 round_trippers.go:580]     Audit-Id: 298419bc-1b89-4009-b333-f9ebaaac792a
	I0906 12:21:11.287287   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.287291   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.287295   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.287299   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.287303   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.287426   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"863","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5508 chars]
	I0906 12:21:11.782776   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.782797   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.782827   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.782834   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.785262   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:11.785274   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.785280   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.785283   13103 round_trippers.go:580]     Audit-Id: 51538514-8dce-4de4-82af-a290dfaf42ba
	I0906 12:21:11.785286   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.785309   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.785316   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.785319   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.785398   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:11.785587   13103 node_ready.go:49] node "multinode-459000" has status "Ready":"True"
	I0906 12:21:11.785600   13103 node_ready.go:38] duration metric: took 16.50328117s for node "multinode-459000" to be "Ready" ...
	I0906 12:21:11.785607   13103 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:21:11.785647   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:11.785653   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.785658   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.785663   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.787289   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:11.787313   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.787322   13103 round_trippers.go:580]     Audit-Id: 2eb240ae-11a6-4539-b244-1f271eb9eb36
	I0906 12:21:11.787326   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.787330   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.787332   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.787336   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.787338   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.787991   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"908"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88963 chars]
	I0906 12:21:11.789896   13103 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:11.789934   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:11.789939   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.789945   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.789949   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.791666   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:11.791678   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.791685   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.791691   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.791694   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.791699   13103 round_trippers.go:580]     Audit-Id: e2e9113d-9ff2-4043-9551-32ea69ce30f1
	I0906 12:21:11.791703   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.791706   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.791821   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:11.792083   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:11.792090   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:11.792095   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:11.792099   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:11.793082   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:11.793091   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:11.793099   13103 round_trippers.go:580]     Audit-Id: af531fa0-d516-472c-b40f-a602285a709a
	I0906 12:21:11.793105   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:11.793110   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:11.793116   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:11.793121   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:11.793126   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:11 GMT
	I0906 12:21:11.793281   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:12.290348   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:12.290372   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.290383   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.290392   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.292907   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:12.292922   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.292929   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.292933   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.292938   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.292941   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.292946   13103 round_trippers.go:580]     Audit-Id: edba7332-6fb8-4802-9aee-2c3c9563ae9c
	I0906 12:21:12.292949   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.293171   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:12.293453   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:12.293460   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.293465   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.293468   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.294506   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.294516   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.294522   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.294527   13103 round_trippers.go:580]     Audit-Id: 15e49219-17f5-4b87-8dc8-8dd484c4cd61
	I0906 12:21:12.294532   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.294537   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.294540   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.294543   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.294732   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:12.790491   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:12.790508   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.790518   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.790523   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.792321   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.792329   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.792334   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.792338   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.792340   13103 round_trippers.go:580]     Audit-Id: 12dc6b9d-7810-4c86-9fc1-81575bbae058
	I0906 12:21:12.792343   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.792346   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.792349   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.792438   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:12.792725   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:12.792732   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:12.792738   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:12.792743   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:12.794016   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:12.794023   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:12.794028   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:12.794031   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:12.794034   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:12.794036   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:12 GMT
	I0906 12:21:12.794039   13103 round_trippers.go:580]     Audit-Id: cf5f613a-ccd9-4db7-9429-f36a136edcb0
	I0906 12:21:12.794043   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:12.794107   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.290999   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:13.291027   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.291039   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.291046   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.294091   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:13.294107   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.294114   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.294119   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.294122   13103 round_trippers.go:580]     Audit-Id: 2914cdba-2b31-4706-b8b6-9fc62d2eb6f8
	I0906 12:21:13.294127   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.294131   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.294136   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.294376   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:13.294791   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:13.294801   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.294809   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.294813   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.296177   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:13.296187   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.296190   13103 round_trippers.go:580]     Audit-Id: c8739fcf-eef5-458d-95ca-2d0ad6c03ca4
	I0906 12:21:13.296194   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.296198   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.296202   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.296206   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.296210   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.296360   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.791389   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:13.791416   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.791428   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.791436   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.794504   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:13.794524   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.794531   13103 round_trippers.go:580]     Audit-Id: 1e0fc598-7606-4704-947f-eff0dfcd612d
	I0906 12:21:13.794536   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.794555   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.794563   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.794567   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.794574   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.794767   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:13.795159   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:13.795169   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:13.795177   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:13.795181   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:13.796593   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:13.796602   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:13.796607   13103 round_trippers.go:580]     Audit-Id: f2819312-997f-4644-981a-c9a96a4b81c4
	I0906 12:21:13.796611   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:13.796613   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:13.796616   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:13.796618   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:13.796621   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:13 GMT
	I0906 12:21:13.796684   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:13.796852   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:14.290091   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:14.290107   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.290116   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.290121   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.292386   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:14.292398   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.292404   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.292408   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.292420   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.292423   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.292426   13103 round_trippers.go:580]     Audit-Id: b216591b-36b4-4ea5-8115-7316edee1389
	I0906 12:21:14.292429   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.292507   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:14.292791   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:14.292798   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.292803   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.292807   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.293808   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:14.293817   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.293824   13103 round_trippers.go:580]     Audit-Id: 3d892497-eaef-4670-a14b-7ad0fc9e3ba4
	I0906 12:21:14.293829   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.293833   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.293836   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.293839   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.293841   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.294121   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"908","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5285 chars]
	I0906 12:21:14.790294   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:14.790336   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.790350   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.790372   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.792990   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:14.793003   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.793011   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.793014   13103 round_trippers.go:580]     Audit-Id: 18d2bb7c-6a82-4cb6-83fb-3ff3f0702de1
	I0906 12:21:14.793018   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.793020   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.793023   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.793057   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.793204   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:14.793496   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:14.793503   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:14.793509   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:14.793512   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:14.794616   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:14.794624   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:14.794628   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:14.794632   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:14.794635   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:14.794637   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:14 GMT
	I0906 12:21:14.794640   13103 round_trippers.go:580]     Audit-Id: fc4d182e-61b8-4501-ac19-a68778dfcb78
	I0906 12:21:14.794643   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:14.794780   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:15.290106   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:15.290127   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.290135   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.290140   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.292663   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:15.292675   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.292680   13103 round_trippers.go:580]     Audit-Id: 103f5efe-9afc-4fd2-a664-63ec6be292a5
	I0906 12:21:15.292683   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.292687   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.292690   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.292692   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.292695   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.292764   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:15.293046   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:15.293053   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.293058   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.293062   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.294226   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:15.294245   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.294254   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.294258   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.294261   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.294264   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.294266   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.294268   13103 round_trippers.go:580]     Audit-Id: eee0a50e-ed8d-4c10-b2cf-e8e447bb8f85
	I0906 12:21:15.294325   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:15.790866   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:15.790888   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.790898   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.790904   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.793667   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:15.793683   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.793689   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.793693   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.793699   13103 round_trippers.go:580]     Audit-Id: 4a951143-6879-4262-b124-530ae44f12b6
	I0906 12:21:15.793703   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.793706   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.793725   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.793900   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:15.794275   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:15.794286   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:15.794293   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:15.794297   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:15.795734   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:15.795744   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:15.795749   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:15.795754   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:15.795758   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:15.795762   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:15 GMT
	I0906 12:21:15.795765   13103 round_trippers.go:580]     Audit-Id: 1d3366c9-abcd-444b-901f-cd8c59b24b0b
	I0906 12:21:15.795767   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:15.795821   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:16.290256   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:16.290275   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.290284   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.290290   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.292642   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:16.292655   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.292660   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.292663   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.292667   13103 round_trippers.go:580]     Audit-Id: d728eba4-78e6-490d-9876-de40ab3d2504
	I0906 12:21:16.292670   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.292674   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.292677   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.292961   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:16.293244   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:16.293252   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.293257   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.293261   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.294276   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:16.294285   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.294290   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.294294   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.294297   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.294300   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.294303   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.294306   13103 round_trippers.go:580]     Audit-Id: cae7b7ec-4833-4094-b8df-dbf19c7d37d2
	I0906 12:21:16.294558   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:16.294728   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:16.791363   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:16.791390   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.791402   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.791408   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.794048   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:16.794060   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.794065   13103 round_trippers.go:580]     Audit-Id: f59eb22e-fd55-4628-b75d-05898d911e96
	I0906 12:21:16.794069   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.794071   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.794075   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.794077   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.794081   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.794151   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:16.794458   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:16.794465   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:16.794470   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:16.794474   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:16.795665   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:16.795672   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:16.795678   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:16.795681   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:16.795685   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:16 GMT
	I0906 12:21:16.795687   13103 round_trippers.go:580]     Audit-Id: ec472ec1-1223-49bc-8f4f-91e810fc4307
	I0906 12:21:16.795690   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:16.795693   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:16.795800   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:17.289973   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:17.289991   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.289997   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.290000   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.291730   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.291752   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.291766   13103 round_trippers.go:580]     Audit-Id: c1fa4522-de4c-4930-9edc-e416768ea52d
	I0906 12:21:17.291786   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.291792   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.291795   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.291798   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.291802   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.291902   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:17.292221   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:17.292228   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.292234   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.292237   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.293364   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.293375   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.293382   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.293386   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.293390   13103 round_trippers.go:580]     Audit-Id: 11f2e1d3-2e85-474d-9b62-a390693faa18
	I0906 12:21:17.293393   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.293395   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.293398   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.293453   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:17.790169   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:17.790185   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.790190   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.790193   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.791789   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.791801   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.791808   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.791814   13103 round_trippers.go:580]     Audit-Id: 9ac97b11-a198-4efb-8efc-0d2cca12e1db
	I0906 12:21:17.791821   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.791827   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.791833   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.791838   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.792162   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:17.792474   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:17.792481   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:17.792487   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:17.792492   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:17.793759   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:17.793771   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:17.793778   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:17.793783   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:17.793788   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:17 GMT
	I0906 12:21:17.793792   13103 round_trippers.go:580]     Audit-Id: 3b1483dc-be8e-438f-bf9b-c9aa98fde328
	I0906 12:21:17.793796   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:17.793800   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:17.793931   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:18.290365   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:18.290394   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.290406   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.290472   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.292778   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:18.292793   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.292798   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.292802   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.292804   13103 round_trippers.go:580]     Audit-Id: 89948fe5-93dd-4262-9047-3782b382d578
	I0906 12:21:18.292807   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.292809   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.292811   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.292878   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:18.293174   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:18.293181   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.293186   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.293189   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.294291   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:18.294299   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.294311   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.294316   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.294318   13103 round_trippers.go:580]     Audit-Id: 7863f14e-37f8-425c-bace-a4f1fd6c881a
	I0906 12:21:18.294322   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.294325   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.294328   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.294974   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:18.295151   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:18.790221   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:18.790246   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.790258   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.790286   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.792634   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:18.792650   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.792658   13103 round_trippers.go:580]     Audit-Id: dff2e427-1f05-4c64-9b5f-a6b13eadb645
	I0906 12:21:18.792662   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.792666   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.792670   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.792676   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.792679   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.792781   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:18.793116   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:18.793145   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:18.793152   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:18.793170   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:18.794612   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:18.794620   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:18.794625   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:18 GMT
	I0906 12:21:18.794628   13103 round_trippers.go:580]     Audit-Id: 1a6da25b-f653-4360-b94d-81192052ff13
	I0906 12:21:18.794632   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:18.794635   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:18.794640   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:18.794643   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:18.794851   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:19.290301   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:19.290337   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.290346   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.290351   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.292530   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:19.292543   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.292548   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.292551   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.292554   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.292558   13103 round_trippers.go:580]     Audit-Id: 2f2fec6e-8368-4ebc-b6ab-4ad12cbf992b
	I0906 12:21:19.292561   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.292564   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.292633   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:19.292932   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:19.292939   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.292944   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.292948   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.294059   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:19.294068   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.294073   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.294077   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.294080   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.294082   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.294085   13103 round_trippers.go:580]     Audit-Id: ce182c3f-f63b-477f-9d1f-903a0e58563f
	I0906 12:21:19.294088   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.294240   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:19.792069   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:19.792088   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.792096   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.792100   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.794256   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:19.794269   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.794274   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.794278   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.794280   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.794282   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.794285   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.794287   13103 round_trippers.go:580]     Audit-Id: 004f896c-9063-4725-b97a-f4adea5fb1c5
	I0906 12:21:19.794494   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:19.794780   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:19.794787   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:19.794793   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:19.794796   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:19.798926   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:19.798937   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:19.798941   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:19.798944   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:19 GMT
	I0906 12:21:19.798947   13103 round_trippers.go:580]     Audit-Id: 4c388aad-97a1-4855-90d9-b470c8d951ee
	I0906 12:21:19.798949   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:19.798951   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:19.798954   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:19.799557   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:20.290622   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:20.290645   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.290657   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.290663   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.293197   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:20.293213   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.293220   13103 round_trippers.go:580]     Audit-Id: fab7e316-5b57-43ad-81ee-16e332f18312
	I0906 12:21:20.293224   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.293228   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.293231   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.293236   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.293241   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.293329   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:20.293699   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:20.293708   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.293716   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.293723   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.294963   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:20.294973   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.294978   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.294985   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.294991   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.294995   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.295000   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.295003   13103 round_trippers.go:580]     Audit-Id: 772e3113-a0aa-49f0-90ea-85d876fbe1f2
	I0906 12:21:20.295067   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:20.295232   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:20.792125   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:20.792146   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.792158   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.792167   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.795233   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:20.795252   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.795260   13103 round_trippers.go:580]     Audit-Id: 615780de-f810-4f27-a16c-ab7c2e73713e
	I0906 12:21:20.795264   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.795269   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.795272   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.795276   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.795280   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.795665   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:20.796042   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:20.796052   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:20.796060   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:20.796081   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:20.797582   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:20.797591   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:20.797597   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:20.797601   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:20.797605   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:20.797608   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:20.797612   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:20 GMT
	I0906 12:21:20.797617   13103 round_trippers.go:580]     Audit-Id: 91eba761-d083-4d31-84b5-7de10ea4f1fa
	I0906 12:21:20.798027   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:21.292039   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:21.292067   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.292079   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.292085   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.295020   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:21.295040   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.295051   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.295059   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.295074   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.295080   13103 round_trippers.go:580]     Audit-Id: da3b9516-e0b0-4030-8bb5-01eecf8f60f0
	I0906 12:21:21.295085   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.295090   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.295205   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:21.295588   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:21.295599   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.295606   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.295610   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.296951   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:21.296959   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.296964   13103 round_trippers.go:580]     Audit-Id: 4f65491b-a85f-457f-b4a6-9957d11b1b92
	I0906 12:21:21.296980   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.296987   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.296990   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.296992   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.296995   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.297061   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:21.790165   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:21.790183   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.790212   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.790223   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.792474   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:21.792486   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.792491   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.792495   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.792498   13103 round_trippers.go:580]     Audit-Id: e76b82d6-7a4c-491d-a0e1-ec55533b249e
	I0906 12:21:21.792501   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.792504   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.792507   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.792692   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:21.792977   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:21.792984   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:21.792989   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:21.792993   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:21.794028   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:21.794035   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:21.794040   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:21 GMT
	I0906 12:21:21.794043   13103 round_trippers.go:580]     Audit-Id: a5239146-5aef-41b1-a558-92ec46d1ec96
	I0906 12:21:21.794046   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:21.794050   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:21.794053   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:21.794056   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:21.794392   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:22.291077   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:22.291095   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.291103   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.291109   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.293678   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:22.293690   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.293698   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.293702   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.293706   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.293712   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.293715   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.293720   13103 round_trippers.go:580]     Audit-Id: 830b9445-9e92-4e00-a756-44b08fd5b00f
	I0906 12:21:22.293841   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:22.294140   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:22.294148   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.294154   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.294157   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.295227   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:22.295237   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.295242   13103 round_trippers.go:580]     Audit-Id: d1cd6776-0a0d-4d08-a619-0d9c0f5c6498
	I0906 12:21:22.295259   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.295265   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.295268   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.295271   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.295275   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.295424   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:22.295600   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:22.792126   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:22.792148   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.792160   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.792166   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.795102   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:22.795118   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.795125   13103 round_trippers.go:580]     Audit-Id: 0d5e174a-14c0-414f-bd28-e23766377584
	I0906 12:21:22.795129   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.795132   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.795138   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.795144   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.795150   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.795330   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:22.795708   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:22.795718   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:22.795726   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:22.795731   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:22.796977   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:22.796984   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:22.796990   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:22.796995   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:22 GMT
	I0906 12:21:22.796999   13103 round_trippers.go:580]     Audit-Id: c85d75ab-f62a-4f5f-b60d-c6982eb9e60b
	I0906 12:21:22.797002   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:22.797006   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:22.797009   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:22.797298   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:23.292087   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:23.292107   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.292119   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.292127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.294736   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:23.294752   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.294762   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.294768   13103 round_trippers.go:580]     Audit-Id: 844e43cd-1b1e-41c0-937a-9274b6eb3fb9
	I0906 12:21:23.294773   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.294778   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.294785   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.294790   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.295083   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:23.295380   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:23.295388   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.295394   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.295398   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.296737   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:23.296745   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.296749   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.296753   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.296756   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.296759   13103 round_trippers.go:580]     Audit-Id: e7b039bb-92fe-488d-81bd-ffa5a26d96a7
	I0906 12:21:23.296761   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.296764   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.296946   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:23.792162   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:23.792185   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.792197   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.792203   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.796761   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:23.796773   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.796778   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.796782   13103 round_trippers.go:580]     Audit-Id: 8a0d37c4-d5d4-4b34-a3f8-8ed244e5d4fd
	I0906 12:21:23.796785   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.796788   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.796791   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.796793   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.796925   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"823","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7092 chars]
	I0906 12:21:23.797224   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:23.797232   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:23.797238   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:23.797242   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:23.799226   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:23.799235   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:23.799242   13103 round_trippers.go:580]     Audit-Id: 242f9ee1-d98d-4280-808b-a656a2b92498
	I0906 12:21:23.799247   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:23.799251   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:23.799255   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:23.799259   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:23.799269   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:23 GMT
	I0906 12:21:23.799414   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.290082   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:24.290096   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.290102   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.290105   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.291823   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:24.291834   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.291840   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.291843   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.291845   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.291848   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.291851   13103 round_trippers.go:580]     Audit-Id: 6a99ab05-94e2-492b-8af0-b2da0016e5b7
	I0906 12:21:24.291854   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.291926   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:24.292215   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:24.292222   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.292228   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.292232   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.294868   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:24.294879   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.294887   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.294891   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.294895   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.294919   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.294928   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.294942   13103 round_trippers.go:580]     Audit-Id: af0ff70b-fd71-4aeb-b3de-4315f34facb9
	I0906 12:21:24.295045   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.792021   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:24.792045   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.792056   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.792061   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.795099   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:24.795111   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.795118   13103 round_trippers.go:580]     Audit-Id: 49cd14f6-b700-4079-acc6-1c23ea6665a8
	I0906 12:21:24.795121   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.795126   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.795129   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.795134   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.795138   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.795441   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:24.795829   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:24.795839   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:24.795847   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:24.795852   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:24.797049   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:24.797056   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:24.797062   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:24.797067   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:24.797071   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:24.797077   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:24 GMT
	I0906 12:21:24.797081   13103 round_trippers.go:580]     Audit-Id: 372e796a-fd9b-4c7f-a4f3-348e4bb85f78
	I0906 12:21:24.797084   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:24.797314   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:24.797488   13103 pod_ready.go:103] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"False"
	I0906 12:21:25.291260   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:25.291308   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.291321   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.291329   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.293814   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:25.293827   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.293837   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.293845   13103 round_trippers.go:580]     Audit-Id: 0913032e-8be9-417d-bb6c-c5369ea32b94
	I0906 12:21:25.293850   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.293855   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.293858   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.293861   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.294069   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"927","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7268 chars]
	I0906 12:21:25.294442   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.294452   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.294460   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.294472   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.295729   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.295737   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.295742   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.295745   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.295749   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.295752   13103 round_trippers.go:580]     Audit-Id: f6de5899-967e-4335-937d-b862caacaac4
	I0906 12:21:25.295755   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.295757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.295934   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.790082   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-m6cmh
	I0906 12:21:25.790110   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.790121   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.790127   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.794252   13103 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 12:21:25.794265   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.794270   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.794274   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.794276   13103 round_trippers.go:580]     Audit-Id: 84c94fcb-2090-4820-8371-d077f05523ae
	I0906 12:21:25.794279   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.794282   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.794285   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.794608   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 7039 chars]
	I0906 12:21:25.794917   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.794925   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.794930   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.794933   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.797962   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:25.797972   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.797977   13103 round_trippers.go:580]     Audit-Id: 0bad9809-253e-41f5-b043-fd2cc4b28671
	I0906 12:21:25.797981   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.797983   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.797986   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.797988   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.797991   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.798051   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.798227   13103 pod_ready.go:93] pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.798236   13103 pod_ready.go:82] duration metric: took 14.008395399s for pod "coredns-6f6b679f8f-m6cmh" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.798242   13103 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.798273   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-459000
	I0906 12:21:25.798278   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.798283   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.798287   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.799854   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.799863   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.799870   13103 round_trippers.go:580]     Audit-Id: 403b7e40-de6c-49d8-bd8c-3037daef8684
	I0906 12:21:25.799876   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.799887   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.799892   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.799896   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.799899   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.800134   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-459000","namespace":"kube-system","uid":"6b5f5bee-fce4-4d53-addd-8e77fb0c227f","resourceVersion":"896","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.33:2379","kubernetes.io/config.hash":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.mirror":"0526416a66b9624000d20fb65a703981","kubernetes.io/config.seen":"2024-09-06T19:16:46.929340688Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6663 chars]
	I0906 12:21:25.800368   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.800374   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.800379   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.800382   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.801593   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.801602   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.801608   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.801612   13103 round_trippers.go:580]     Audit-Id: cfc71669-8af7-4367-87d9-6662789b2dae
	I0906 12:21:25.801614   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.801617   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.801621   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.801624   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.801765   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.801934   13103 pod_ready.go:93] pod "etcd-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.801942   13103 pod_ready.go:82] duration metric: took 3.694957ms for pod "etcd-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.801952   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.801981   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-459000
	I0906 12:21:25.801986   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.801991   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.801996   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.802919   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.802927   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.802934   13103 round_trippers.go:580]     Audit-Id: 63d71678-c53e-4543-b6a5-d040eec32368
	I0906 12:21:25.802942   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.802946   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.802951   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.802955   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.802960   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.803115   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-459000","namespace":"kube-system","uid":"a7ee0531-75a6-405c-928c-1185a0e5ebd0","resourceVersion":"893","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.33:8443","kubernetes.io/config.hash":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.mirror":"0683da937341551af0076f4edfd39eef","kubernetes.io/config.seen":"2024-09-06T19:16:52.157527221Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7891 chars]
	I0906 12:21:25.803342   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.803349   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.803355   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.803358   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.804246   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.804252   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.804256   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.804259   13103 round_trippers.go:580]     Audit-Id: 0ac897aa-9ea8-4691-969b-24565f1cec79
	I0906 12:21:25.804262   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.804264   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.804267   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.804270   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.804446   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.804600   13103 pod_ready.go:93] pod "kube-apiserver-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.804607   13103 pod_ready.go:82] duration metric: took 2.650187ms for pod "kube-apiserver-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.804617   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.804642   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-459000
	I0906 12:21:25.804646   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.804652   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.804656   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.805698   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:25.805710   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.805719   13103 round_trippers.go:580]     Audit-Id: d2e6c5a5-f0ad-4b3f-bee2-a22972423cd2
	I0906 12:21:25.805725   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.805729   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.805733   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.805738   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.805741   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.805885   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-459000","namespace":"kube-system","uid":"ef9a4034-636f-4d52-b328-40aff0e03ccb","resourceVersion":"882","creationTimestamp":"2024-09-06T19:16:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.mirror":"6c2b324ccb60123ce756873668712c51","kubernetes.io/config.seen":"2024-09-06T19:16:52.157528036Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7464 chars]
	I0906 12:21:25.806107   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:25.806114   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.806120   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.806124   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.807056   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.807066   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.807072   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.807075   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.807089   13103 round_trippers.go:580]     Audit-Id: 035907c2-91f4-4135-ba77-18e01d4e93aa
	I0906 12:21:25.807095   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.807098   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.807100   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.807202   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:25.807359   13103 pod_ready.go:93] pod "kube-controller-manager-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.807367   13103 pod_ready.go:82] duration metric: took 2.745265ms for pod "kube-controller-manager-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.807373   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.807399   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crzpl
	I0906 12:21:25.807404   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.807410   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.807414   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.808305   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.808312   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.808316   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.808320   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.808323   13103 round_trippers.go:580]     Audit-Id: e1b9f127-6568-4885-a928-a313180b5cfc
	I0906 12:21:25.808326   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.808330   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.808333   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.808489   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crzpl","generateName":"kube-proxy-","namespace":"kube-system","uid":"253c78d8-0d56-49e8-a00c-99218c50beac","resourceVersion":"505","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:21:25.808732   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m02
	I0906 12:21:25.808739   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.808746   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.808749   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.809591   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:25.809599   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.809603   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.809608   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.809611   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.809613   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:25 GMT
	I0906 12:21:25.809616   13103 round_trippers.go:580]     Audit-Id: 12243566-34b1-46b1-8e77-91a9e8c62dc1
	I0906 12:21:25.809625   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.809714   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m02","uid":"42483c05-2f0a-48b5-a783-4c5958284f86","resourceVersion":"573","creationTimestamp":"2024-09-06T19:17:40Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_17_40_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:17:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3818 chars]
	I0906 12:21:25.809852   13103 pod_ready.go:93] pod "kube-proxy-crzpl" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:25.809859   13103 pod_ready.go:82] duration metric: took 2.481658ms for pod "kube-proxy-crzpl" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.809864   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:25.990651   13103 request.go:632] Waited for 180.750264ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:21:25.990733   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t24bs
	I0906 12:21:25.990745   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:25.990758   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:25.990766   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:25.993181   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:25.993195   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:25.993203   13103 round_trippers.go:580]     Audit-Id: c00f3d33-4b56-4ae6-a5e8-81c5026b67c8
	I0906 12:21:25.993206   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:25.993211   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:25.993214   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:25.993219   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:25.993223   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:25.993303   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t24bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"626397be-3b5a-4dd4-8932-283e8edb0d27","resourceVersion":"878","creationTimestamp":"2024-09-06T19:16:56Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6394 chars]
	I0906 12:21:26.191274   13103 request.go:632] Waited for 197.606599ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.191412   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.191429   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.191441   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.191450   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.194207   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.194222   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.194229   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.194233   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.194237   13103 round_trippers.go:580]     Audit-Id: 9fb32ea2-3741-4f49-bb50-a2d213c3ba43
	I0906 12:21:26.194241   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.194245   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.194250   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.194413   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:26.194661   13103 pod_ready.go:93] pod "kube-proxy-t24bs" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.194674   13103 pod_ready.go:82] duration metric: took 384.805332ms for pod "kube-proxy-t24bs" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.194683   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.390345   13103 request.go:632] Waited for 195.620855ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:21:26.390407   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vqcpj
	I0906 12:21:26.390414   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.390423   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.390449   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.392604   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.392621   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.392646   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.392655   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.392658   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.392660   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.392665   13103 round_trippers.go:580]     Audit-Id: 270f284c-7338-4efa-b17a-1a10c014da62
	I0906 12:21:26.392667   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.392768   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vqcpj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6","resourceVersion":"735","creationTimestamp":"2024-09-06T19:18:30Z","labels":{"controller-revision-hash":"5976bc5f75","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"12600189-2d69-40c5-a6b5-21128780ce8f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"12600189-2d69-40c5-a6b5-21128780ce8f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6197 chars]
	I0906 12:21:26.591656   13103 request.go:632] Waited for 198.580484ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:21:26.591735   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000-m03
	I0906 12:21:26.591747   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.591759   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.591766   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.594204   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.594217   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.594223   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.594227   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.594230   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.594232   13103 round_trippers.go:580]     Audit-Id: 2f715b53-17f6-46aa-a414-cdfa14512543
	I0906 12:21:26.594235   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.594238   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.594320   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000-m03","uid":"6c54d256-cf96-4ec0-9d0b-36c85c77ef2b","resourceVersion":"760","creationTimestamp":"2024-09-06T19:19:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_06T12_19_25_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:19:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3635 chars]
	I0906 12:21:26.594506   13103 pod_ready.go:93] pod "kube-proxy-vqcpj" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.594515   13103 pod_ready.go:82] duration metric: took 399.828258ms for pod "kube-proxy-vqcpj" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.594522   13103 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.792106   13103 request.go:632] Waited for 197.521385ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:21:26.792145   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-459000
	I0906 12:21:26.792151   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.792159   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.792164   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.794274   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.794287   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.794292   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.794295   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.794297   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:26 GMT
	I0906 12:21:26.794300   13103 round_trippers.go:580]     Audit-Id: a0b7ecef-315a-4ee1-b32f-542e84989097
	I0906 12:21:26.794310   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.794325   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.794421   13103 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-459000","namespace":"kube-system","uid":"4602221a-c2e8-4f7d-a31e-2910196cb32b","resourceVersion":"887","creationTimestamp":"2024-09-06T19:16:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.mirror":"fd306228ad8a16f01a60f4a1761ce579","kubernetes.io/config.seen":"2024-09-06T19:16:46.929338017Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5194 chars]
	I0906 12:21:26.990594   13103 request.go:632] Waited for 195.896372ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.990633   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes/multinode-459000
	I0906 12:21:26.990639   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:26.990649   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:26.990656   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:26.992802   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:26.992815   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:26.992820   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:26.992824   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:26.992827   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:26.992829   13103 round_trippers.go:580]     Audit-Id: 4800a535-951a-44f6-b035-5009b5db7c8d
	I0906 12:21:26.992832   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:26.992836   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:26.993044   13103 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-06T19:16:49Z","fieldsType":"FieldsV1","fi [truncated 5165 chars]
	I0906 12:21:26.993239   13103 pod_ready.go:93] pod "kube-scheduler-multinode-459000" in "kube-system" namespace has status "Ready":"True"
	I0906 12:21:26.993248   13103 pod_ready.go:82] duration metric: took 398.723382ms for pod "kube-scheduler-multinode-459000" in "kube-system" namespace to be "Ready" ...
	I0906 12:21:26.993255   13103 pod_ready.go:39] duration metric: took 15.207711162s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 12:21:26.993267   13103 api_server.go:52] waiting for apiserver process to appear ...
	I0906 12:21:26.993321   13103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:21:27.005121   13103 command_runner.go:130] > 1692
	I0906 12:21:27.005342   13103 api_server.go:72] duration metric: took 31.982233194s to wait for apiserver process to appear ...
	I0906 12:21:27.005350   13103 api_server.go:88] waiting for apiserver healthz status ...
	I0906 12:21:27.005359   13103 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:21:27.008362   13103 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:21:27.008393   13103 round_trippers.go:463] GET https://192.169.0.33:8443/version
	I0906 12:21:27.008397   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.008403   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.008406   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.008898   13103 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 12:21:27.008905   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.008910   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.008915   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.008919   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.008922   13103 round_trippers.go:580]     Content-Length: 263
	I0906 12:21:27.008927   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.008941   13103 round_trippers.go:580]     Audit-Id: afc79679-e8c5-4a0a-b383-34d3dd5cf866
	I0906 12:21:27.008945   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.008953   13103 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.0",
	  "gitCommit": "9edcffcde5595e8a5b1a35f88c421764e575afce",
	  "gitTreeState": "clean",
	  "buildDate": "2024-08-13T07:28:49Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0906 12:21:27.008973   13103 api_server.go:141] control plane version: v1.31.0
	I0906 12:21:27.008981   13103 api_server.go:131] duration metric: took 3.627345ms to wait for apiserver health ...
	I0906 12:21:27.008986   13103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 12:21:27.192136   13103 request.go:632] Waited for 183.091553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.192271   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.192278   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.192286   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.192292   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.195706   13103 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 12:21:27.195721   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.195729   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.195733   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.195738   13103 round_trippers.go:580]     Audit-Id: af203598-9027-4608-960f-5efe9b85e522
	I0906 12:21:27.195741   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.195757   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.195761   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.196938   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89323 chars]
	I0906 12:21:27.198936   13103 system_pods.go:59] 12 kube-system pods found
	I0906 12:21:27.198946   13103 system_pods.go:61] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running
	I0906 12:21:27.198950   13103 system_pods.go:61] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running
	I0906 12:21:27.198953   13103 system_pods.go:61] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:21:27.198957   13103 system_pods.go:61] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:21:27.198959   13103 system_pods.go:61] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:21:27.198962   13103 system_pods.go:61] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running
	I0906 12:21:27.198968   13103 system_pods.go:61] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running
	I0906 12:21:27.198970   13103 system_pods.go:61] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:21:27.198973   13103 system_pods.go:61] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:21:27.198975   13103 system_pods.go:61] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:21:27.198978   13103 system_pods.go:61] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running
	I0906 12:21:27.198982   13103 system_pods.go:61] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:21:27.198989   13103 system_pods.go:74] duration metric: took 189.999782ms to wait for pod list to return data ...
	I0906 12:21:27.198995   13103 default_sa.go:34] waiting for default service account to be created ...
	I0906 12:21:27.390207   13103 request.go:632] Waited for 191.164821ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:21:27.390245   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/default/serviceaccounts
	I0906 12:21:27.390252   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.390260   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.390264   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.392029   13103 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 12:21:27.392044   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.392049   13103 round_trippers.go:580]     Audit-Id: 2fbbe1f8-a5e2-419a-8fe6-1b6b60c2c579
	I0906 12:21:27.392053   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.392056   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.392058   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.392061   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.392063   13103 round_trippers.go:580]     Content-Length: 261
	I0906 12:21:27.392066   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.392086   13103 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2b97d238-fe0f-46a4-b550-296f608e88e4","resourceVersion":"351","creationTimestamp":"2024-09-06T19:16:57Z"}}]}
	I0906 12:21:27.392202   13103 default_sa.go:45] found service account: "default"
	I0906 12:21:27.392211   13103 default_sa.go:55] duration metric: took 193.2122ms for default service account to be created ...
	I0906 12:21:27.392219   13103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 12:21:27.592153   13103 request.go:632] Waited for 199.860611ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.592227   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/namespaces/kube-system/pods
	I0906 12:21:27.592245   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.592256   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.592265   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.595123   13103 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 12:21:27.595136   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.595143   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.595153   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.595157   13103 round_trippers.go:580]     Audit-Id: bffb0aa4-39bf-41e6-9363-65a6d47aff42
	I0906 12:21:27.595160   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.595164   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.595168   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.596227   13103 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"coredns-6f6b679f8f-m6cmh","generateName":"coredns-6f6b679f8f-","namespace":"kube-system","uid":"ba4177c1-9ec9-4bab-bac7-87474036436d","resourceVersion":"934","creationTimestamp":"2024-09-06T19:16:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6f6b679f8f"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6f6b679f8f","uid":"d75bfe4c-1804-41ec-9196-1f3ffc32fba6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-06T19:16:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d75bfe4c-1804-41ec-9196-1f3ffc32fba6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 89323 chars]
	I0906 12:21:27.598193   13103 system_pods.go:86] 12 kube-system pods found
	I0906 12:21:27.598204   13103 system_pods.go:89] "coredns-6f6b679f8f-m6cmh" [ba4177c1-9ec9-4bab-bac7-87474036436d] Running
	I0906 12:21:27.598208   13103 system_pods.go:89] "etcd-multinode-459000" [6b5f5bee-fce4-4d53-addd-8e77fb0c227f] Running
	I0906 12:21:27.598211   13103 system_pods.go:89] "kindnet-255hz" [a15c2ca1-aea7-4a41-a3f2-fb0620e91614] Running
	I0906 12:21:27.598214   13103 system_pods.go:89] "kindnet-88j6v" [ef7bbbbf-ce02-4b88-b67a-9913447fae59] Running
	I0906 12:21:27.598216   13103 system_pods.go:89] "kindnet-vj8hx" [0168b4a7-dba0-4c33-a101-74257b43ccba] Running
	I0906 12:21:27.598220   13103 system_pods.go:89] "kube-apiserver-multinode-459000" [a7ee0531-75a6-405c-928c-1185a0e5ebd0] Running
	I0906 12:21:27.598224   13103 system_pods.go:89] "kube-controller-manager-multinode-459000" [ef9a4034-636f-4d52-b328-40aff0e03ccb] Running
	I0906 12:21:27.598227   13103 system_pods.go:89] "kube-proxy-crzpl" [253c78d8-0d56-49e8-a00c-99218c50beac] Running
	I0906 12:21:27.598229   13103 system_pods.go:89] "kube-proxy-t24bs" [626397be-3b5a-4dd4-8932-283e8edb0d27] Running
	I0906 12:21:27.598236   13103 system_pods.go:89] "kube-proxy-vqcpj" [b8613e56-ed6a-4e6d-89bc-8d08cbacd7d6] Running
	I0906 12:21:27.598239   13103 system_pods.go:89] "kube-scheduler-multinode-459000" [4602221a-c2e8-4f7d-a31e-2910196cb32b] Running
	I0906 12:21:27.598243   13103 system_pods.go:89] "storage-provisioner" [4e34dcf1-a1c9-464c-9680-a55570fa0319] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 12:21:27.598250   13103 system_pods.go:126] duration metric: took 206.027101ms to wait for k8s-apps to be running ...
	I0906 12:21:27.598262   13103 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 12:21:27.598315   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:21:27.609404   13103 system_svc.go:56] duration metric: took 11.137288ms WaitForService to wait for kubelet
	I0906 12:21:27.609422   13103 kubeadm.go:582] duration metric: took 32.586314845s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 12:21:27.609435   13103 node_conditions.go:102] verifying NodePressure condition ...
	I0906 12:21:27.791184   13103 request.go:632] Waited for 181.707048ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.33:8443/api/v1/nodes
	I0906 12:21:27.791256   13103 round_trippers.go:463] GET https://192.169.0.33:8443/api/v1/nodes
	I0906 12:21:27.791270   13103 round_trippers.go:469] Request Headers:
	I0906 12:21:27.791280   13103 round_trippers.go:473]     Accept: application/json, */*
	I0906 12:21:27.791284   13103 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0906 12:21:27.798698   13103 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 12:21:27.798713   13103 round_trippers.go:577] Response Headers:
	I0906 12:21:27.798721   13103 round_trippers.go:580]     Date: Fri, 06 Sep 2024 19:21:27 GMT
	I0906 12:21:27.798725   13103 round_trippers.go:580]     Audit-Id: 367702bd-19ff-4848-9862-dc41de16b578
	I0906 12:21:27.798729   13103 round_trippers.go:580]     Cache-Control: no-cache, private
	I0906 12:21:27.798735   13103 round_trippers.go:580]     Content-Type: application/json
	I0906 12:21:27.798739   13103 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d809a92c-87c8-442a-b8d0-764e4f6e6a1c
	I0906 12:21:27.798744   13103 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 71378957-4de0-419b-bb33-9ef2b2767ead
	I0906 12:21:27.798923   13103 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"938"},"items":[{"metadata":{"name":"multinode-459000","uid":"d09ff043-f56d-4ec7-b523-2039927e6083","resourceVersion":"910","creationTimestamp":"2024-09-06T19:16:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-459000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e6b6435971a63e36b5096cd544634422129cef13","minikube.k8s.io/name":"multinode-459000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_06T12_16_53_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14655 chars]
	I0906 12:21:27.799352   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799364   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799371   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799374   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799377   13103 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 12:21:27.799381   13103 node_conditions.go:123] node cpu capacity is 2
	I0906 12:21:27.799384   13103 node_conditions.go:105] duration metric: took 189.944138ms to run NodePressure ...
	I0906 12:21:27.799392   13103 start.go:241] waiting for startup goroutines ...
	I0906 12:21:27.799399   13103 start.go:246] waiting for cluster config update ...
	I0906 12:21:27.799404   13103 start.go:255] writing updated cluster config ...
	I0906 12:21:27.821252   13103 out.go:201] 
	I0906 12:21:27.843093   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:21:27.843181   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:27.864582   13103 out.go:177] * Starting "multinode-459000-m02" worker node in "multinode-459000" cluster
	I0906 12:21:27.906824   13103 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 12:21:27.906848   13103 cache.go:56] Caching tarball of preloaded images
	I0906 12:21:27.906988   13103 preload.go:172] Found /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0906 12:21:27.907000   13103 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 12:21:27.907095   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:27.907830   13103 start.go:360] acquireMachinesLock for multinode-459000-m02: {Name:mk6774997b000639e1f6cdf83b050bcb024e422a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 12:21:27.907909   13103 start.go:364] duration metric: took 62.547µs to acquireMachinesLock for "multinode-459000-m02"
	I0906 12:21:27.907926   13103 start.go:96] Skipping create...Using existing machine configuration
	I0906 12:21:27.907932   13103 fix.go:54] fixHost starting: m02
	I0906 12:21:27.908283   13103 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:21:27.908299   13103 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:21:27.917825   13103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57515
	I0906 12:21:27.918176   13103 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:21:27.918549   13103 main.go:141] libmachine: Using API Version  1
	I0906 12:21:27.918566   13103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:21:27.918784   13103 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:21:27.918904   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:21:27.918992   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetState
	I0906 12:21:27.919074   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:27.919163   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 12773
	I0906 12:21:27.920087   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid 12773 missing from process table
	I0906 12:21:27.920111   13103 fix.go:112] recreateIfNeeded on multinode-459000-m02: state=Stopped err=<nil>
	I0906 12:21:27.920123   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	W0906 12:21:27.920203   13103 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 12:21:27.942601   13103 out.go:177] * Restarting existing hyperkit VM for "multinode-459000-m02" ...
	I0906 12:21:27.984774   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .Start
	I0906 12:21:27.984975   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:27.985004   13103 main.go:141] libmachine: (multinode-459000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid
	I0906 12:21:27.986238   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid 12773 missing from process table
	I0906 12:21:27.986246   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | pid 12773 is in state "Stopped"
	I0906 12:21:27.986260   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid...
	I0906 12:21:27.986559   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Using UUID 656fac0c-2257-4452-9309-51b4437053c1
	I0906 12:21:28.010616   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Generated MAC fe:64:cc:9a:2e:14
	I0906 12:21:28.010637   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000
	I0906 12:21:28.010773   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"656fac0c-2257-4452-9309-51b4437053c1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0906 12:21:28.010802   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"656fac0c-2257-4452-9309-51b4437053c1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0906 12:21:28.010862   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "656fac0c-2257-4452-9309-51b4437053c1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/multinode-459000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage,/Users/j
enkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"}
	I0906 12:21:28.010908   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 656fac0c-2257-4452-9309-51b4437053c1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/multinode-459000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/tty,log=/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/bzimage,/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/mult
inode-459000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-459000"
	I0906 12:21:28.010922   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0906 12:21:28.012308   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 DEBUG: hyperkit: Pid is 13138
	I0906 12:21:28.012836   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Attempt 0
	I0906 12:21:28.012847   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:21:28.012954   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 13138
	I0906 12:21:28.014959   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Searching for fe:64:cc:9a:2e:14 in /var/db/dhcpd_leases ...
	I0906 12:21:28.015045   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Found 34 entries in /var/db/dhcpd_leases!
	I0906 12:21:28.015075   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:3a:dc:bb:38:e3:28 ID:1,3a:dc:bb:38:e3:28 Lease:0x66dca76e}
	I0906 12:21:28.015090   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:f2:52:5d:79:cf:f5 ID:1,f2:52:5d:79:cf:f5 Lease:0x66db55d2}
	I0906 12:21:28.015098   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:fe:64:cc:9a:2e:14 ID:1,fe:64:cc:9a:2e:14 Lease:0x66dca6c9}
	I0906 12:21:28.015104   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | Found match: fe:64:cc:9a:2e:14
	I0906 12:21:28.015122   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | IP: 192.169.0.34
	I0906 12:21:28.015211   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetConfigRaw
	I0906 12:21:28.015984   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:21:28.016200   13103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/multinode-459000/config.json ...
	I0906 12:21:28.016694   13103 machine.go:93] provisionDockerMachine start ...
	I0906 12:21:28.016705   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:21:28.016832   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:21:28.016942   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:21:28.017045   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:21:28.017163   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:21:28.017252   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:21:28.017405   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:21:28.017574   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:21:28.017581   13103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 12:21:28.020425   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0906 12:21:28.028631   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0906 12:21:28.029659   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:21:28.029679   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:21:28.029689   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:21:28.029703   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:21:28.418268   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0906 12:21:28.418289   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0906 12:21:28.532958   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0906 12:21:28.532980   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0906 12:21:28.532991   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0906 12:21:28.533007   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0906 12:21:28.533853   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0906 12:21:28.533862   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0906 12:21:34.182409   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0906 12:21:34.182422   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0906 12:21:34.182441   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0906 12:21:34.205614   13103 main.go:141] libmachine: (multinode-459000-m02) DBG | 2024/09/06 12:21:34 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0906 12:22:03.080676   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 12:22:03.080691   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.080823   13103 buildroot.go:166] provisioning hostname "multinode-459000-m02"
	I0906 12:22:03.080835   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.080941   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.081027   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.081123   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.081198   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.081290   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.081435   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.081584   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.081600   13103 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-459000-m02 && echo "multinode-459000-m02" | sudo tee /etc/hostname
	I0906 12:22:03.147432   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-459000-m02
	
	I0906 12:22:03.147447   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.147580   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.147686   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.147777   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.147882   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.148030   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.148181   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.148193   13103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-459000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-459000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-459000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 12:22:03.210956   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 12:22:03.210971   13103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19576-7784/.minikube CaCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19576-7784/.minikube}
	I0906 12:22:03.210983   13103 buildroot.go:174] setting up certificates
	I0906 12:22:03.210989   13103 provision.go:84] configureAuth start
	I0906 12:22:03.210996   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetMachineName
	I0906 12:22:03.211127   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:22:03.211230   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.211317   13103 provision.go:143] copyHostCerts
	I0906 12:22:03.211342   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:22:03.211388   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem, removing ...
	I0906 12:22:03.211393   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem
	I0906 12:22:03.211527   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/key.pem (1675 bytes)
	I0906 12:22:03.211723   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:22:03.211752   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem, removing ...
	I0906 12:22:03.211757   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem
	I0906 12:22:03.211879   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/ca.pem (1078 bytes)
	I0906 12:22:03.212065   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:22:03.212095   13103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem, removing ...
	I0906 12:22:03.212100   13103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem
	I0906 12:22:03.212185   13103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19576-7784/.minikube/cert.pem (1123 bytes)
	I0906 12:22:03.212343   13103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca-key.pem org=jenkins.multinode-459000-m02 san=[127.0.0.1 192.169.0.34 localhost minikube multinode-459000-m02]
	I0906 12:22:03.292544   13103 provision.go:177] copyRemoteCerts
	I0906 12:22:03.292595   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 12:22:03.292609   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.292765   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.292872   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.292982   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.293071   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:03.328230   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 12:22:03.328298   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 12:22:03.348053   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 12:22:03.348131   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0906 12:22:03.367639   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 12:22:03.367712   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 12:22:03.387332   13103 provision.go:87] duration metric: took 176.33502ms to configureAuth
	I0906 12:22:03.387347   13103 buildroot.go:189] setting minikube options for container-runtime
	I0906 12:22:03.387513   13103 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:22:03.387530   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:03.387682   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.387763   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.387851   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.387925   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.388009   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.388123   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.388249   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.388257   13103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 12:22:03.443432   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0906 12:22:03.443443   13103 buildroot.go:70] root file system type: tmpfs
	I0906 12:22:03.443517   13103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 12:22:03.443528   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.443676   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.443804   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.443902   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.443992   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.444119   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.444251   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.444297   13103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.33"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 12:22:03.511777   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.33
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 12:22:03.511796   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:03.511939   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:03.512046   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.512150   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:03.512229   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:03.512369   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:03.512523   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:03.512537   13103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 12:22:05.101095   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0906 12:22:05.101110   13103 machine.go:96] duration metric: took 37.084578612s to provisionDockerMachine
	I0906 12:22:05.101117   13103 start.go:293] postStartSetup for "multinode-459000-m02" (driver="hyperkit")
	I0906 12:22:05.101128   13103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 12:22:05.101143   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.101326   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 12:22:05.101340   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.101444   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.101546   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.101646   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.101727   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.136158   13103 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 12:22:05.139064   13103 command_runner.go:130] > NAME=Buildroot
	I0906 12:22:05.139075   13103 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 12:22:05.139080   13103 command_runner.go:130] > ID=buildroot
	I0906 12:22:05.139085   13103 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 12:22:05.139091   13103 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 12:22:05.139245   13103 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 12:22:05.139254   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/addons for local assets ...
	I0906 12:22:05.139354   13103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19576-7784/.minikube/files for local assets ...
	I0906 12:22:05.139523   13103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> 83642.pem in /etc/ssl/certs
	I0906 12:22:05.139532   13103 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem -> /etc/ssl/certs/83642.pem
	I0906 12:22:05.139729   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 12:22:05.147744   13103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/ssl/certs/83642.pem --> /etc/ssl/certs/83642.pem (1708 bytes)
	I0906 12:22:05.167085   13103 start.go:296] duration metric: took 65.96042ms for postStartSetup
	I0906 12:22:05.167104   13103 fix.go:56] duration metric: took 37.259343707s for fixHost
	I0906 12:22:05.167120   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.167254   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.167358   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.167446   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.167521   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.167651   13103 main.go:141] libmachine: Using SSH client type: native
	I0906 12:22:05.167820   13103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x91c7ea0] 0x91cac00 <nil>  [] 0s} 192.169.0.34 22 <nil> <nil>}
	I0906 12:22:05.167828   13103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 12:22:05.223842   13103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725650525.359228063
	
	I0906 12:22:05.223853   13103 fix.go:216] guest clock: 1725650525.359228063
	I0906 12:22:05.223859   13103 fix.go:229] Guest: 2024-09-06 12:22:05.359228063 -0700 PDT Remote: 2024-09-06 12:22:05.16711 -0700 PDT m=+120.857961279 (delta=192.118063ms)
	I0906 12:22:05.223869   13103 fix.go:200] guest clock delta is within tolerance: 192.118063ms
	I0906 12:22:05.223874   13103 start.go:83] releasing machines lock for "multinode-459000-m02", held for 37.316129214s
	I0906 12:22:05.223892   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.224018   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:22:05.247126   13103 out.go:177] * Found network options:
	I0906 12:22:05.267149   13103 out.go:177]   - NO_PROXY=192.169.0.33
	W0906 12:22:05.288480   13103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:22:05.288517   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289464   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289709   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:22:05.289822   13103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 12:22:05.289870   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	W0906 12:22:05.289953   13103 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 12:22:05.290045   13103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 12:22:05.290049   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.290072   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:22:05.290260   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.290309   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:22:05.290487   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:22:05.290522   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.290612   13103 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:22:05.290641   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.290732   13103 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:22:05.322318   13103 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 12:22:05.322403   13103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 12:22:05.322457   13103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 12:22:05.371219   13103 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 12:22:05.371281   13103 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0906 12:22:05.371302   13103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 12:22:05.371309   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:22:05.371372   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:22:05.386255   13103 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0906 12:22:05.386586   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 12:22:05.395028   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 12:22:05.403351   13103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 12:22:05.403403   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 12:22:05.411931   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:22:05.420232   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 12:22:05.428446   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 12:22:05.436920   13103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 12:22:05.445773   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 12:22:05.453982   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 12:22:05.462364   13103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0906 12:22:05.470872   13103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 12:22:05.478456   13103 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 12:22:05.478577   13103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 12:22:05.486053   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:22:05.577721   13103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 12:22:05.597370   13103 start.go:495] detecting cgroup driver to use...
	I0906 12:22:05.597442   13103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 12:22:05.616652   13103 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0906 12:22:05.617149   13103 command_runner.go:130] > [Unit]
	I0906 12:22:05.617160   13103 command_runner.go:130] > Description=Docker Application Container Engine
	I0906 12:22:05.617165   13103 command_runner.go:130] > Documentation=https://docs.docker.com
	I0906 12:22:05.617170   13103 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0906 12:22:05.617176   13103 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0906 12:22:05.617186   13103 command_runner.go:130] > StartLimitBurst=3
	I0906 12:22:05.617191   13103 command_runner.go:130] > StartLimitIntervalSec=60
	I0906 12:22:05.617195   13103 command_runner.go:130] > [Service]
	I0906 12:22:05.617202   13103 command_runner.go:130] > Type=notify
	I0906 12:22:05.617206   13103 command_runner.go:130] > Restart=on-failure
	I0906 12:22:05.617209   13103 command_runner.go:130] > Environment=NO_PROXY=192.169.0.33
	I0906 12:22:05.617215   13103 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0906 12:22:05.617224   13103 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0906 12:22:05.617230   13103 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0906 12:22:05.617236   13103 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0906 12:22:05.617242   13103 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0906 12:22:05.617248   13103 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0906 12:22:05.617254   13103 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0906 12:22:05.617263   13103 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0906 12:22:05.617271   13103 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0906 12:22:05.617274   13103 command_runner.go:130] > ExecStart=
	I0906 12:22:05.617286   13103 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0906 12:22:05.617291   13103 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0906 12:22:05.617298   13103 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0906 12:22:05.617304   13103 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0906 12:22:05.617308   13103 command_runner.go:130] > LimitNOFILE=infinity
	I0906 12:22:05.617312   13103 command_runner.go:130] > LimitNPROC=infinity
	I0906 12:22:05.617315   13103 command_runner.go:130] > LimitCORE=infinity
	I0906 12:22:05.617321   13103 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0906 12:22:05.617325   13103 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0906 12:22:05.617329   13103 command_runner.go:130] > TasksMax=infinity
	I0906 12:22:05.617332   13103 command_runner.go:130] > TimeoutStartSec=0
	I0906 12:22:05.617338   13103 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0906 12:22:05.617341   13103 command_runner.go:130] > Delegate=yes
	I0906 12:22:05.617346   13103 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0906 12:22:05.617354   13103 command_runner.go:130] > KillMode=process
	I0906 12:22:05.617358   13103 command_runner.go:130] > [Install]
	I0906 12:22:05.617361   13103 command_runner.go:130] > WantedBy=multi-user.target
	I0906 12:22:05.617421   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:22:05.628871   13103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 12:22:05.647873   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 12:22:05.659524   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:22:05.669927   13103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 12:22:05.694232   13103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 12:22:05.704881   13103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 12:22:05.719722   13103 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0906 12:22:05.719995   13103 ssh_runner.go:195] Run: which cri-dockerd
	I0906 12:22:05.722778   13103 command_runner.go:130] > /usr/bin/cri-dockerd
	I0906 12:22:05.722977   13103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 12:22:05.730138   13103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 12:22:05.743763   13103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 12:22:05.836175   13103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 12:22:05.941964   13103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 12:22:05.941990   13103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 12:22:05.956052   13103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 12:22:06.050692   13103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 12:23:07.093245   13103 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0906 12:23:07.093261   13103 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0906 12:23:07.093271   13103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.026654808s)
	I0906 12:23:07.093333   13103 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0906 12:23:07.102433   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0906 12:23:07.102446   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.391304610Z" level=info msg="Starting up"
	I0906 12:23:07.102458   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392004946Z" level=info msg="containerd not running, starting managed containerd"
	I0906 12:23:07.102471   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392654963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	I0906 12:23:07.102483   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.410081610Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	I0906 12:23:07.102493   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424704285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0906 12:23:07.102506   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424727648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0906 12:23:07.102517   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424763525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0906 12:23:07.102526   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424774162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102536   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424814976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102546   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424848725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102564   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424989631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102577   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425025159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102587   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425037295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102597   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425045404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102606   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425070702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102615   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425145665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102630   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426659099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102641   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426697531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0906 12:23:07.102662   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426805598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0906 12:23:07.102671   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426843741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0906 12:23:07.102680   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426872817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0906 12:23:07.102689   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426890938Z" level=info msg="metadata content store policy set" policy=shared
	I0906 12:23:07.102699   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428817057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0906 12:23:07.102713   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428864164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0906 12:23:07.102723   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428927784Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0906 12:23:07.102733   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428940464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0906 12:23:07.102743   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428949588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0906 12:23:07.102753   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.429051358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0906 12:23:07.102762   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434538379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0906 12:23:07.102771   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434628871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0906 12:23:07.102780   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434666891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0906 12:23:07.102790   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434697689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0906 12:23:07.102799   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434728108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102811   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434757897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102821   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434791514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102831   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434822320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102842   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434853529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102859   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434883549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102892   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434912597Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102903   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434940545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0906 12:23:07.102913   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434974771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102921   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435007785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102930   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435036996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102938   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435106915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102947   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435139241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102956   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435168766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102964   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435199068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102973   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435228429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102982   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435261229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.102991   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435300063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103001   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435332353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103009   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435361642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103018   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435390212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103027   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435421195Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0906 12:23:07.103036   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435456060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103044   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435486969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103053   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435518328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0906 12:23:07.103063   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435600410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0906 12:23:07.103074   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435642893Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0906 12:23:07.103088   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435672635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0906 12:23:07.103181   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435702100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0906 12:23:07.103192   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435729967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0906 12:23:07.103203   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435813148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0906 12:23:07.103210   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435857835Z" level=info msg="NRI interface is disabled by configuration."
	I0906 12:23:07.103218   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436104040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0906 12:23:07.103226   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436210486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0906 12:23:07.103234   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436350222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0906 12:23:07.103242   13103 command_runner.go:130] > Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436412176Z" level=info msg="containerd successfully booted in 0.027112s"
	I0906 12:23:07.103250   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.419560925Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0906 12:23:07.103257   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.432687700Z" level=info msg="Loading containers: start."
	I0906 12:23:07.103277   13103 command_runner.go:130] > Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.537897424Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0906 12:23:07.103288   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.166682137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0906 12:23:07.103301   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.209864072Z" level=warning msg="error locating sandbox id 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d: sandbox 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d not found"
	I0906 12:23:07.103309   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.210077786Z" level=info msg="Loading containers: done."
	I0906 12:23:07.103319   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.216995153Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	I0906 12:23:07.103325   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.217101276Z" level=info msg="Daemon has completed initialization"
	I0906 12:23:07.103332   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235153584Z" level=info msg="API listen on /var/run/docker.sock"
	I0906 12:23:07.103338   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235304358Z" level=info msg="API listen on [::]:2376"
	I0906 12:23:07.103345   13103 command_runner.go:130] > Sep 06 19:22:05 multinode-459000-m02 systemd[1]: Started Docker Application Container Engine.
	I0906 12:23:07.103352   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.198320582Z" level=info msg="Processing signal 'terminated'"
	I0906 12:23:07.103361   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199273282Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0906 12:23:07.103370   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199793722Z" level=info msg="Daemon shutdown complete"
	I0906 12:23:07.103379   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0906 12:23:07.103415   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199992866Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0906 12:23:07.103423   13103 command_runner.go:130] > Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.200011550Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0906 12:23:07.103428   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0906 12:23:07.103433   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0906 12:23:07.103439   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0906 12:23:07.103445   13103 command_runner.go:130] > Sep 06 19:22:07 multinode-459000-m02 dockerd[842]: time="2024-09-06T19:22:07.237222595Z" level=info msg="Starting up"
	I0906 12:23:07.103453   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 dockerd[842]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0906 12:23:07.103461   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0906 12:23:07.103467   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0906 12:23:07.103473   13103 command_runner.go:130] > Sep 06 19:23:07 multinode-459000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0906 12:23:07.127876   13103 out.go:201] 
	W0906 12:23:07.148646   13103 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 06 19:22:03 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.391304610Z" level=info msg="Starting up"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392004946Z" level=info msg="containerd not running, starting managed containerd"
	Sep 06 19:22:03 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:03.392654963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=514
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.410081610Z" level=info msg="starting containerd" revision=472731909fa34bd7bc9c087e4c27943f9835f111 version=v1.7.21
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424704285Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424727648Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424763525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424774162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424814976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424848725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.424989631Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425025159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425037295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425045404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425070702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.425145665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426659099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426697531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426805598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426843741Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426872817Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.426890938Z" level=info msg="metadata content store policy set" policy=shared
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428817057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428864164Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428927784Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428940464Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.428949588Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.429051358Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434538379Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434628871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434666891Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434697689Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434728108Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434757897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434791514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434822320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434853529Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434883549Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434912597Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434940545Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.434974771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435007785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435036996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435106915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435139241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435168766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435199068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435228429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435261229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435300063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435332353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435361642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435390212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435421195Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435456060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435486969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435518328Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435600410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435642893Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435672635Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435702100Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435729967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435813148Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.435857835Z" level=info msg="NRI interface is disabled by configuration."
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436104040Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436210486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436350222Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 06 19:22:03 multinode-459000-m02 dockerd[514]: time="2024-09-06T19:22:03.436412176Z" level=info msg="containerd successfully booted in 0.027112s"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.419560925Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.432687700Z" level=info msg="Loading containers: start."
	Sep 06 19:22:04 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:04.537897424Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.166682137Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.209864072Z" level=warning msg="error locating sandbox id 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d: sandbox 697668eff644ee33e51c406d6c935ed298a05104b9a2d54648502150509bfd3d not found"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.210077786Z" level=info msg="Loading containers: done."
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.216995153Z" level=info msg="Docker daemon" commit=3ab5c7d0 containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.217101276Z" level=info msg="Daemon has completed initialization"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235153584Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 06 19:22:05 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:05.235304358Z" level=info msg="API listen on [::]:2376"
	Sep 06 19:22:05 multinode-459000-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.198320582Z" level=info msg="Processing signal 'terminated'"
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199273282Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199793722Z" level=info msg="Daemon shutdown complete"
	Sep 06 19:22:06 multinode-459000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.199992866Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 06 19:22:06 multinode-459000-m02 dockerd[507]: time="2024-09-06T19:22:06.200011550Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 06 19:22:07 multinode-459000-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 06 19:22:07 multinode-459000-m02 dockerd[842]: time="2024-09-06T19:22:07.237222595Z" level=info msg="Starting up"
	Sep 06 19:23:07 multinode-459000-m02 dockerd[842]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 06 19:23:07 multinode-459000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0906 12:23:07.148721   13103 out.go:270] * 
	W0906 12:23:07.149793   13103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 12:23:07.211707   13103 out.go:201] 
	
	
	==> Docker <==
	Sep 06 19:21:22 multinode-459000 dockerd[845]: time="2024-09-06T19:21:22.615695116Z" level=info msg="ignoring event" container=015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 19:21:22 multinode-459000 dockerd[851]: time="2024-09-06T19:21:22.615971143Z" level=warning msg="cleaning up after shim disconnected" id=015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2 namespace=moby
	Sep 06 19:21:22 multinode-459000 dockerd[851]: time="2024-09-06T19:21:22.616014209Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.010796214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.010939483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.010959472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.011053858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135527858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135651991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135664327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.135724235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 cri-dockerd[1098]: time="2024-09-06T19:21:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad31325ddc3d5e3ea42101967060f67540a28c4b1f41caca8f16e7b7a3a3c9fd/resolv.conf as [nameserver 192.169.0.1]"
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.241786501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.243664310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.243737195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.243870991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 cri-dockerd[1098]: time="2024-09-06T19:21:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bba495a5518dd208171ccb9db9cceab820b1c2c235c6c1192f0651c43f53c7f7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.356855941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.357093941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.357216765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:24 multinode-459000 dockerd[851]: time="2024-09-06T19:21:24.357512098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.012811668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.012879392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.012892925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 06 19:21:34 multinode-459000 dockerd[851]: time="2024-09-06T19:21:34.013329730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6494a194eedc9       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   8b97e9911f708       storage-provisioner
	6df02b21eb759       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   bba495a5518dd       busybox-7dff88458-b9hnk
	ddc90f5715c82       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   ad31325ddc3d5       coredns-6f6b679f8f-m6cmh
	2dec5851e2896       12968670680f4                                                                                         4 minutes ago       Running             kindnet-cni               1                   2ecc938461a84       kindnet-255hz
	015c097641e0c       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   8b97e9911f708       storage-provisioner
	2788eae2c4b75       ad83b2ca7b09e                                                                                         4 minutes ago       Running             kube-proxy                1                   50bf8760257c4       kube-proxy-t24bs
	d96e2b3df6396       2e96e5913fc06                                                                                         4 minutes ago       Running             etcd                      1                   f3428c6375c14       etcd-multinode-459000
	a58e4533fa0ae       1766f54c897f0                                                                                         4 minutes ago       Running             kube-scheduler            1                   7edc764eee369       kube-scheduler-multinode-459000
	d35fb0e18edb3       604f5db92eaa8                                                                                         4 minutes ago       Running             kube-apiserver            1                   77a0be6b32eea       kube-apiserver-multinode-459000
	f1e4bf2515674       045733566833c                                                                                         4 minutes ago       Running             kube-controller-manager   1                   7bc49d66119d6       kube-controller-manager-multinode-459000
	eaef5d6a6c3c3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   7 minutes ago       Exited              busybox                   0                   109009d4e6323       busybox-7dff88458-b9hnk
	12b00d3e81cd0       cbb01a7bd410d                                                                                         8 minutes ago       Exited              coredns                   0                   6766a97ec06fd       coredns-6f6b679f8f-m6cmh
	b2cede164434e       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              8 minutes ago       Exited              kindnet-cni               0                   98079ff18be9c       kindnet-255hz
	e4605e60128b4       ad83b2ca7b09e                                                                                         8 minutes ago       Exited              kube-proxy                0                   68811f115b6f5       kube-proxy-t24bs
	7158af8be3418       1766f54c897f0                                                                                         8 minutes ago       Exited              kube-scheduler            0                   8455632502ed7       kube-scheduler-multinode-459000
	fde17951087f9       045733566833c                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   8b8fefcb9e0b2       kube-controller-manager-multinode-459000
	487be703273e5       2e96e5913fc06                                                                                         8 minutes ago       Exited              etcd                      0                   6f313c531f3e2       etcd-multinode-459000
	95c1a9b114b11       604f5db92eaa8                                                                                         8 minutes ago       Exited              kube-apiserver            0                   03508ab110f1b       kube-apiserver-multinode-459000
	
	
	==> coredns [12b00d3e81cd] <==
	[INFO] 10.244.1.2:36981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000043001s
	[INFO] 10.244.1.2:59796 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062383s
	[INFO] 10.244.1.2:50646 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076591s
	[INFO] 10.244.1.2:54430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00006178s
	[INFO] 10.244.1.2:41662 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085328s
	[INFO] 10.244.1.2:51706 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000596695s
	[INFO] 10.244.1.2:52994 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000040808s
	[INFO] 10.244.0.3:39411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007956s
	[INFO] 10.244.0.3:34556 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060317s
	[INFO] 10.244.0.3:60370 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072655s
	[INFO] 10.244.0.3:39210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079178s
	[INFO] 10.244.1.2:55856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100259s
	[INFO] 10.244.1.2:40604 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000064183s
	[INFO] 10.244.1.2:48296 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000042905s
	[INFO] 10.244.1.2:53569 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063922s
	[INFO] 10.244.0.3:41096 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076712s
	[INFO] 10.244.0.3:37573 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000103095s
	[INFO] 10.244.0.3:59516 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071527s
	[INFO] 10.244.0.3:38561 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000066227s
	[INFO] 10.244.1.2:59777 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124892s
	[INFO] 10.244.1.2:46865 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000039395s
	[INFO] 10.244.1.2:35696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000036351s
	[INFO] 10.244.1.2:60341 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000080309s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ddc90f5715c8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58562 - 51028 "HINFO IN 63603369670783559.5709821715024449636. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.012385783s
	
	
	==> describe nodes <==
	Name:               multinode-459000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-459000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-459000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T12_16_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:16:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-459000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:25:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:16:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:16:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:16:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:21:11 +0000   Fri, 06 Sep 2024 19:21:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.33
	  Hostname:    multinode-459000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 59888293c14e47a2952ddf9c971cd2a5
	  System UUID:                01eb4f7c-0000-0000-b53d-2237e8e3c176
	  Boot ID:                    6bf9b2b1-1659-49f1-953a-d0b309ced65e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b9hnk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 coredns-6f6b679f8f-m6cmh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m46s
	  kube-system                 etcd-multinode-459000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m52s
	  kube-system                 kindnet-255hz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m47s
	  kube-system                 kube-apiserver-multinode-459000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 kube-controller-manager-multinode-459000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m51s
	  kube-system                 kube-proxy-t24bs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-scheduler-multinode-459000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m44s                  kube-proxy       
	  Normal  Starting                 4m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m56s (x8 over 8m57s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m56s (x8 over 8m57s)  kubelet          Node multinode-459000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s (x7 over 8m57s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m51s                  kubelet          Node multinode-459000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m51s                  kubelet          Node multinode-459000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m51s                  kubelet          Node multinode-459000 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m51s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m47s                  node-controller  Node multinode-459000 event: Registered Node multinode-459000 in Controller
	  Normal  NodeReady                8m27s                  kubelet          Node multinode-459000 status is now: NodeReady
	  Normal  Starting                 4m56s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m56s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m56s)  kubelet          Node multinode-459000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m56s)  kubelet          Node multinode-459000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m49s                  node-controller  Node multinode-459000 event: Registered Node multinode-459000 in Controller
	
	
	Name:               multinode-459000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-459000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-459000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T12_17_40_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:17:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-459000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:19:42 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 19:18:11 +0000   Fri, 06 Sep 2024 19:21:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.34
	  Hostname:    multinode-459000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 88c7641a2b7841348f12d58f0355ab66
	  System UUID:                656f4452-0000-0000-9309-51b4437053c1
	  Boot ID:                    755cc985-7413-4d2f-983a-08afc00f0ddd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-m65s6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                 kindnet-vj8hx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m3s
	  kube-system                 kube-proxy-crzpl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  8m3s (x2 over 8m4s)  kubelet          Node multinode-459000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m3s (x2 over 8m4s)  kubelet          Node multinode-459000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m3s (x2 over 8m4s)  kubelet          Node multinode-459000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m2s                 node-controller  Node multinode-459000-m02 event: Registered Node multinode-459000-m02 in Controller
	  Normal  NodeReady                7m41s                kubelet          Node multinode-459000-m02 status is now: NodeReady
	  Normal  RegisteredNode           4m49s                node-controller  Node multinode-459000-m02 event: Registered Node multinode-459000-m02 in Controller
	  Normal  NodeNotReady             4m9s                 node-controller  Node multinode-459000-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.008116] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.724977] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006928] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.840491] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.241289] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.368127] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.096645] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +1.842542] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.248713] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.115189] systemd-fstab-generator[822]: Ignoring "noauto" option for root device
	[  +0.115757] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +2.451883] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.103391] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +0.109908] systemd-fstab-generator[1075]: Ignoring "noauto" option for root device
	[  +0.053000] kauditd_printk_skb: 239 callbacks suppressed
	[  +0.077555] systemd-fstab-generator[1090]: Ignoring "noauto" option for root device
	[  +0.410106] systemd-fstab-generator[1217]: Ignoring "noauto" option for root device
	[  +1.408579] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +4.610250] kauditd_printk_skb: 128 callbacks suppressed
	[  +2.937326] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[Sep 6 19:21] kauditd_printk_skb: 72 callbacks suppressed
	[ +11.432898] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [487be703273e] <==
	{"level":"info","ts":"2024-09-06T19:16:48.607960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became leader at term 2"}
	{"level":"info","ts":"2024-09-06T19:16:48.607989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bb09f1afbf61f63 elected leader 1bb09f1afbf61f63 at term 2"}
	{"level":"info","ts":"2024-09-06T19:16:48.639274Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1bb09f1afbf61f63","local-member-attributes":"{Name:multinode-459000 ClientURLs:[https://192.169.0.33:2379]}","request-path":"/0/members/1bb09f1afbf61f63/attributes","cluster-id":"cece6eff570a9df4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:16:48.639417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:16:48.639904Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:16:48.641090Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:16:48.641117Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:16:48.646499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:16:48.641728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:16:48.647567Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:16:48.650625Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:16:48.651565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.33:2379"}
	{"level":"info","ts":"2024-09-06T19:16:48.652060Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cece6eff570a9df4","local-member-id":"1bb09f1afbf61f63","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:16:48.653245Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:16:48.653358Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:19:56.487570Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-06T19:19:56.487605Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-459000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.33:2380"],"advertise-client-urls":["https://192.169.0.33:2379"]}
	{"level":"warn","ts":"2024-09-06T19:19:56.487653Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:19:56.487778Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:19:56.579680Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.33:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:19:56.579709Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.33:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:19:56.579976Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1bb09f1afbf61f63","current-leader-member-id":"1bb09f1afbf61f63"}
	{"level":"info","ts":"2024-09-06T19:19:56.586275Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:19:56.586377Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:19:56.586386Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-459000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.33:2380"],"advertise-client-urls":["https://192.169.0.33:2379"]}
	
	
	==> etcd [d96e2b3df639] <==
	{"level":"info","ts":"2024-09-06T19:20:48.959305Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:20:48.959389Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:20:48.959399Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:20:48.959100Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:20:48.961760Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T19:20:48.963778Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:20:48.964314Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.169.0.33:2380"}
	{"level":"info","ts":"2024-09-06T19:20:48.964470Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1bb09f1afbf61f63","initial-advertise-peer-urls":["https://192.169.0.33:2380"],"listen-peer-urls":["https://192.169.0.33:2380"],"advertise-client-urls":["https://192.169.0.33:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.33:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:20:48.964489Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:20:49.546042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:20:49.546093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:20:49.546111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 received MsgPreVoteResp from 1bb09f1afbf61f63 at term 2"}
	{"level":"info","ts":"2024-09-06T19:20:49.546120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.546125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 received MsgVoteResp from 1bb09f1afbf61f63 at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.546132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1bb09f1afbf61f63 became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.546313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1bb09f1afbf61f63 elected leader 1bb09f1afbf61f63 at term 3"}
	{"level":"info","ts":"2024-09-06T19:20:49.549923Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1bb09f1afbf61f63","local-member-attributes":"{Name:multinode-459000 ClientURLs:[https://192.169.0.33:2379]}","request-path":"/0/members/1bb09f1afbf61f63/attributes","cluster-id":"cece6eff570a9df4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:20:49.550025Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:20:49.551974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:20:49.555409Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:20:49.558334Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:20:49.560187Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:20:49.560256Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:20:49.577311Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:20:49.578028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.33:2379"}
	
	
	==> kernel <==
	 19:25:43 up 5 min,  0 users,  load average: 0.50, 0.23, 0.10
	Linux multinode-459000 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2dec5851e289] <==
	I0906 19:25:03.742191       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:25:03.742525       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:25:03.742588       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:25:13.743874       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:25:13.744113       1 main.go:299] handling current node
	I0906 19:25:13.744379       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:25:13.744561       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:25:13.744770       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:25:13.744949       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:25:23.744674       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:25:23.744789       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:25:23.745002       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:25:23.745116       1 main.go:299] handling current node
	I0906 19:25:23.745219       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:25:23.745315       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:25:33.742490       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:25:33.742515       1 main.go:299] handling current node
	I0906 19:25:33.742526       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:25:33.742532       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:25:33.743011       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:25:33.743380       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:25:43.740547       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:25:43.740658       1 main.go:299] handling current node
	I0906 19:25:43.740696       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:25:43.740760       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b2cede164434] <==
	I0906 19:19:22.128316       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:22.128367       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.2.0/24] 
	I0906 19:19:22.128941       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:22.128980       1 main.go:299] handling current node
	I0906 19:19:22.128992       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:22.128997       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:32.126841       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:32.126964       1 main.go:299] handling current node
	I0906 19:19:32.127063       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:32.127167       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:32.127358       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:32.127440       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:19:32.127630       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.169.0.35 Flags: [] Table: 0} 
	I0906 19:19:42.129810       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:42.129849       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	I0906 19:19:42.130116       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:42.130147       1 main.go:299] handling current node
	I0906 19:19:42.130156       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:42.130160       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:52.125007       1 main.go:295] Handling node with IPs: map[192.169.0.33:{}]
	I0906 19:19:52.125086       1 main.go:299] handling current node
	I0906 19:19:52.125104       1 main.go:295] Handling node with IPs: map[192.169.0.34:{}]
	I0906 19:19:52.125112       1 main.go:322] Node multinode-459000-m02 has CIDR [10.244.1.0/24] 
	I0906 19:19:52.125322       1 main.go:295] Handling node with IPs: map[192.169.0.35:{}]
	I0906 19:19:52.125368       1 main.go:322] Node multinode-459000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [95c1a9b114b1] <==
	W0906 19:19:56.508071       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.548776       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.548892       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549003       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549069       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549163       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549227       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549290       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549383       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549481       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549544       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549685       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549749       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.549832       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550068       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550143       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550196       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550251       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550304       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550358       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550413       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550468       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550727       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.550786       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:19:56.554309       1 logging.go:55] [core] [Channel #16 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d35fb0e18edb] <==
	I0906 19:20:51.213655       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:20:51.213849       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:20:51.218917       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:20:51.218957       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:20:51.222736       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:20:51.222770       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:20:51.222776       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:20:51.222780       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:20:51.223007       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:20:51.224883       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:20:51.256270       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:20:51.256498       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:20:51.259903       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:20:51.272033       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:20:51.272139       1 policy_source.go:224] refreshing policies
	I0906 19:20:51.304129       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:20:52.117425       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:20:52.320268       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.33]
	I0906 19:20:52.321132       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:20:52.325993       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:20:53.143451       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 19:20:53.270181       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 19:20:53.281490       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 19:20:53.339474       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:20:53.344943       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [f1e4bf251567] <==
	I0906 19:20:54.786796       1 shared_informer.go:320] Caches are synced for cronjob
	I0906 19:20:55.177282       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:20:55.187799       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:20:55.187844       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 19:21:11.876376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000"
	I0906 19:21:11.876668       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:21:11.884112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000"
	I0906 19:21:14.684029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000"
	I0906 19:21:24.407113       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="32.695µs"
	I0906 19:21:25.520620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="3.781757ms"
	I0906 19:21:25.520669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.207µs"
	I0906 19:21:25.535379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="7.955237ms"
	I0906 19:21:25.535680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="244.452µs"
	I0906 19:21:34.693436       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:21:34.693677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:21:34.697965       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m02"
	I0906 19:21:34.710040       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:21:34.710499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m02"
	I0906 19:21:34.718186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.833991ms"
	I0906 19:21:34.718730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.541µs"
	I0906 19:21:39.774502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m02"
	I0906 19:21:49.864707       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:23:11.434460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:23:11.440892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:25:41.553356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	
	
	==> kube-controller-manager [fde17951087f] <==
	I0906 19:18:31.854324       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-459000-m03"
	I0906 19:18:31.908645       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:41.268862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:53.418273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:53.418643       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:18:53.424043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:18:56.866156       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.843606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.854973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.997581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:23.997845       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:19:25.001102       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:19:25.002613       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-459000-m03\" does not exist"
	I0906 19:19:25.009068       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-459000-m03" podCIDRs=["10.244.3.0/24"]
	I0906 19:19:25.009104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:25.009119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:25.013998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:25.864408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:26.156277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:26.956082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:35.394991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:43.273729       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-459000-m02"
	I0906 19:19:43.274191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:43.280958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	I0906 19:19:46.884789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-459000-m03"
	
	
	==> kube-proxy [2788eae2c4b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:20:52.791025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:20:52.821823       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.33"]
	E0906 19:20:52.821889       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:20:52.871039       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:20:52.871061       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:20:52.871077       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:20:52.875061       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:20:52.875526       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:20:52.875596       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:20:52.877336       1 config.go:197] "Starting service config controller"
	I0906 19:20:52.877515       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:20:52.885155       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:20:52.877746       1 config.go:326] "Starting node config controller"
	I0906 19:20:52.885623       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:20:52.885628       1 shared_informer.go:320] Caches are synced for node config
	I0906 19:20:52.881916       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:20:52.887471       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:20:52.887477       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e4605e60128b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:16:58.347702       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:16:58.357397       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.169.0.33"]
	E0906 19:16:58.357448       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:16:58.407427       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:16:58.407503       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:16:58.407523       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:16:58.444815       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:16:58.446848       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:16:58.446878       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:16:58.448577       1 config.go:197] "Starting service config controller"
	I0906 19:16:58.448610       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:16:58.448626       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:16:58.448630       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:16:58.450713       1 config.go:326] "Starting node config controller"
	I0906 19:16:58.450741       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:16:58.549832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:16:58.549836       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:16:58.551579       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7158af8be341] <==
	E0906 19:16:49.915023       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 19:16:49.910625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 19:16:49.915284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 19:16:49.915555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:16:49.915841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:16:49.916137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:16:49.917025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910756       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:16:49.917348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:49.910821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 19:16:49.917930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.751396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 19:16:50.751615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.890423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:16:50.890467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.909016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 19:16:50.909062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:16:50.968205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:16:50.968248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0906 19:16:51.507806       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:19:56.483090       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a58e4533fa0a] <==
	I0906 19:20:49.414242       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:20:51.191401       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:20:51.191656       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:20:51.191788       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:20:51.191929       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:20:51.212149       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:20:51.212314       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:20:51.214484       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:20:51.215369       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:20:51.216409       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:20:51.216221       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:20:51.320542       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:21:07 multinode-459000 kubelet[1357]: E0906 19:21:07.954390    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-b9hnk" podUID="ccbaa8a5-a216-4cec-a0bb-d3979a865144"
	Sep 06 19:21:23 multinode-459000 kubelet[1357]: I0906 19:21:23.361664    1357 scope.go:117] "RemoveContainer" containerID="b8675b45ba97ecf6fcbc195cc754e085b88aa6669460fab127e3e88567afe358"
	Sep 06 19:21:23 multinode-459000 kubelet[1357]: I0906 19:21:23.361990    1357 scope.go:117] "RemoveContainer" containerID="015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2"
	Sep 06 19:21:23 multinode-459000 kubelet[1357]: E0906 19:21:23.362429    1357 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4e34dcf1-a1c9-464c-9680-a55570fa0319)\"" pod="kube-system/storage-provisioner" podUID="4e34dcf1-a1c9-464c-9680-a55570fa0319"
	Sep 06 19:21:33 multinode-459000 kubelet[1357]: I0906 19:21:33.954973    1357 scope.go:117] "RemoveContainer" containerID="015c097641e0cb36e92c85382989c4a23228f6b4b480d88b2ace89f8ab9c86b2"
	Sep 06 19:21:47 multinode-459000 kubelet[1357]: E0906 19:21:47.996843    1357 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:21:47 multinode-459000 kubelet[1357]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:22:47 multinode-459000 kubelet[1357]: E0906 19:22:47.988509    1357 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:22:47 multinode-459000 kubelet[1357]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:23:47 multinode-459000 kubelet[1357]: E0906 19:23:47.988334    1357 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:23:47 multinode-459000 kubelet[1357]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:23:47 multinode-459000 kubelet[1357]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:23:47 multinode-459000 kubelet[1357]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:23:47 multinode-459000 kubelet[1357]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:24:47 multinode-459000 kubelet[1357]: E0906 19:24:47.988638    1357 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:24:47 multinode-459000 kubelet[1357]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:24:47 multinode-459000 kubelet[1357]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:24:47 multinode-459000 kubelet[1357]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:24:47 multinode-459000 kubelet[1357]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-459000 -n multinode-459000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-459000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (154.45s)
TestScheduledStopUnix (141.93s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-642000 --memory=2048 --driver=hyperkit 
E0906 12:32:59.526099    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:33:13.458641    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-642000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.606281715s)

                                                
                                                
-- stdout --
	* [scheduled-stop-642000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-642000" primary control-plane node in "scheduled-stop-642000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-642000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 92:46:6e:73:5:6b
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-642000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e6:b7:df:b4:26:94
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e6:b7:df:b4:26:94
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-642000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-642000" primary control-plane node in "scheduled-stop-642000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-642000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 92:46:6e:73:5:6b
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-642000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e6:b7:df:b4:26:94
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e6:b7:df:b4:26:94
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-09-06 12:35:14.532075 -0700 PDT m=+3991.691389075
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-642000 -n scheduled-stop-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-642000 -n scheduled-stop-642000: exit status 7 (80.355793ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 12:35:14.610660   13885 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 12:35:14.610681   13885 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-642000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-642000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-642000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-642000: (5.243287325s)
--- FAIL: TestScheduledStopUnix (141.93s)
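The repeated `IP address never found in dhcp leases file` failures above come from the hyperkit driver polling macOS's DHCP lease file (`/var/db/dhcpd_leases`) for a record matching the VM's MAC address. A minimal, self-contained sketch of that lookup (the lease record is inlined so the snippet runs anywhere; this is an illustration, not minikube's actual source):

```shell
# Hedged sketch (not minikube's actual code): resolve a VM's IP by
# scanning a dhcpd_leases-style record set for its MAC address.
# Note macOS bootpd writes octets without zero padding (5, not 05),
# matching the MACs printed in the failures above.
mac="92:46:6e:73:5:6b"   # MAC from the offline-docker failure above
ip=$(awk -v mac="$mac" '
  /ip_address=/              { ip = $0; sub(/.*ip_address=/, "", ip) }
  $0 ~ ("hw_address=1," mac) { print ip; exit }
' <<'EOF'
{
	name=offline-docker-273000
	ip_address=192.168.64.5
	hw_address=1,92:46:6e:73:5:6b
	identifier=1,92:46:6e:73:5:6b
	lease=0x66db1a2c
}
EOF
)
echo "resolved IP: $ip"
```

When the guest never completes DHCP (the failure mode in this run), no record with that MAC ever appears, the poll times out, and the driver surfaces the "Temporary error" retries seen in the log.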

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (4.54s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19576
- KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2115134169/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
! Unable to update hyperkit driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.34.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.34.0/docker-machine-driver-hyperkit.sha256 Dst:/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2115134169/001/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x92dc800 0x92dc800 0x92dc800 0x92dc800 0x92dc800 0x92dc800 0x92dc800] Decompressors:map[bz2:0xc000761750 gz:0xc000761758 tar:0xc000761700 tar.bz2:0xc000761710 tar.gz:0xc000761720 tar.xz:0xc000761730 tar.zst:0xc000761740 tbz2:0xc000761710 tgz:0xc000761720 txz:0xc000761730 tzst:0xc000761740 xz:0xc000761760 zip:0xc000761770 zst:0xc000761768] Getters:map[file:0xc00075d9b0 http:0xc000530fa0 https:0xc000530ff0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
driver_install_or_update_test.go:218: invalid driver version. expected: v1.34.0, got: v1.2.0
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (4.54s)
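The `invalid checksum` error above comes from the download URL's `?checksum=file:<url>.sha256` parameter: the checksum file itself returned 404, so the downloaded driver could not be verified and was rejected, leaving the old v1.2.0 binary in place. A hedged local simulation of that verify step (temp files stand in for the release assets; `openssl` is used for portability):

```shell
# Hedged sketch of checksum-file verification, simulated locally.
tmp=$(mktemp -d)
printf 'fake driver binary\n' > "$tmp/driver"
# Publisher side: record the expected digest (the .sha256 asset that
# was missing in the failure above).
openssl dgst -sha256 -r "$tmp/driver" | cut -d' ' -f1 > "$tmp/driver.sha256"
# Client side: recompute the digest and compare against the asset.
expected=$(cat "$tmp/driver.sha256")
actual=$(openssl dgst -sha256 -r "$tmp/driver" | cut -d' ' -f1)
if [ "$expected" = "$actual" ]; then verdict="checksum ok"; else verdict="invalid checksum"; fi
echo "$verdict"
rm -rf "$tmp"
```

If the `.sha256` fetch fails (here, the 404 from GitHub), there is no `expected` value to compare against, and the downloader must treat the artifact as unverifiable rather than install it.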

                                                
                                    
TestPause/serial/Start (141.74s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-090000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-090000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.655286204s)

                                                
                                                
-- stdout --
	* [pause-090000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-090000" primary control-plane node in "pause-090000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-090000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:1c:c:8f:9c:73
	* Failed to start hyperkit VM. Running "minikube delete -p pause-090000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:8:fa:e8:ff:da
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:8:fa:e8:ff:da
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-090000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-090000 -n pause-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-090000 -n pause-090000: exit status 7 (80.971939ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 13:16:21.160530   16064 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0906 13:16:21.160554   16064 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-090000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (7201.842s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-251000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0
E0906 13:27:46.175812    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/enable-default-cni-178000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:27:51.297888    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/enable-default-cni-178000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:27:54.210496    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/false-178000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:27:59.615479    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:28:01.540270    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/enable-default-cni-178000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:28:13.548579    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:28:14.691635    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/false-178000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:28:22.022754    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/enable-default-cni-178000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:28:29.457307    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/custom-flannel-178000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:28:32.676817    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/calico-178000/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (51m27s)
	TestNetworkPlugins/group (2m41s)
	TestStartStop (12m56s)
	TestStartStop/group/no-preload (2m41s)
	TestStartStop/group/no-preload/serial (2m41s)
	TestStartStop/group/no-preload/serial/SecondStart (57s)
	TestStartStop/group/old-k8s-version (4m7s)
	TestStartStop/group/old-k8s-version/serial (4m7s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (1m12s)

                                                
                                                
goroutine 4560 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000669040, 0xc000ab5bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00003c360, {0x127080c0, 0x2a, 0x2a}, {0xdd406c5?, 0xfa6696d?, 0x1272b760?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00095caa0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00095caa0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00041dc80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 4571 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001f60180, 0xc0005bf680)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4568
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3386 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0016cd520, {0xfa0d91b?, 0x0?}, 0xc0017e4680)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016cd520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0016cd520, 0xc001762200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3382
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3382 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0016cc000, 0x1116a6e0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2944
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 71 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 70
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 145 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 128
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 141 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 140
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4537 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000989d10, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0009f5580?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000989d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017a5870, {0x111787e0, 0xc001ab2840}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017a5870, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 146 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000988ec0, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 128
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3457 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3440
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 140 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc00097cf50, 0xc00097cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x0?, 0xc00097cf50, 0xc00097cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xe2bab25?, 0xc00099aa20?, 0x111951c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 146
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 139 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000988e90, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00097bd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000988ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00090e030, {0x111787e0, 0xc000847410}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00090e030, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 146
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3439 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0017625d0, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0013a0d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001762600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00049a010, {0x111787e0, 0xc001470030}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00049a010, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3451
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3519 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0015a5810, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001420d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0015a5840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00087d620, {0x111787e0, 0xc0015b80c0}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00087d620, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3497
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 4188 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc0018fbf50, 0xc0018fbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x0?, 0xc0018fbf50, 0xc0018fbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4175
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

                                                
                                                
goroutine 3707 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3706
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3521 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3520
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1625 [chan receive, 97 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021a7a40, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3679 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3678
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4590 [IO wait]:
internal/poll.runtime_pollWait(0x5a13c958, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00220d5c0?, 0xc001730341?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00220d5c0, {0xc001730341, 0x1dcbf, 0x1dcbf})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00166e7e0, {0xc001730341?, 0xc0005bf8c0?, 0x1fe8e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013bc210, {0x111771a8, 0xc0005c4eb0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x111772e8, 0xc0013bc210}, {0x111771a8, 0xc0005c4eb0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x3a22657079742220?, {0x111772e8, 0xc0013bc210})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x126c8d10?, {0x111772e8?, 0xc0013bc210?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x111772e8, 0xc0013bc210}, {0x11177268, 0xc00166e7e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x44414f52423c203a?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4588
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 1609 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0021a7a10, 0x28)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001596d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021a7a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00087e330, {0x111787e0, 0xc001334360}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00087e330, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1625
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

                                                
                                                
goroutine 3949 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3948
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3692 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3691
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3383 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0016cc1a0, {0xfa0d91b?, 0x0?}, 0xc0021ca580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016cc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0016cc1a0, 0xc001762140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3382
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3440 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc000112f50, 0xc000112f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x80?, 0xc000112f50, 0xc000112f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc001ae0f00?, 0xc0021ca280?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xddfa7e5?, 0xc001436a80?, 0xc001ae0a80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3451
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 4135 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0015a44d0, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0009f5580?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0015a4500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00087fbc0, {0x111787e0, 0xc0018f43c0}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00087fbc0, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4149
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4148 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2832 [chan receive, 52 minutes]:
testing.(*T).Run(0xc0006681a0, {0xfa0c2c1?, 0xb779483a6e8?}, 0xc00148f5f0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0006681a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0006681a0, 0x1116a538)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 4538 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc0018f6f50, 0xc0018f6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x98?, 0xc0018f6f50, 0xc0018f6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc0022ca9c0?, 0xddb4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0018f6fd0?, 0xddfa844?, 0xc001920e00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4525
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3948 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc001431750, 0xc001431798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x20?, 0xc001431750, 0xc001431798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc0016cdd40?, 0xddb4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xddfa7e5?, 0xc001f61500?, 0xc0022a5620?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3686 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3673
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 2918 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3496 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3492
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3706 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0xe0?, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xe231876?, 0xc001b10900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001b10900?, 0xe241fc5?, 0xc00085d100?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3693
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 4149 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0015a4500, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 1611 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1610
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4568 [syscall, 2 minutes]:
syscall.syscall6(0xc0018f5f80?, 0x1000000000010?, 0x10000000019?, 0x59c86b58?, 0x90?, 0x131c45b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001361b48?, 0xdc810c5?, 0x90?, 0x110d3e00?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xddb1885?, 0xc001361b7c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0014da840)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001f60180)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001f60180)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0022caea0, 0xc001f60180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x1119e950, 0xc0004d41c0}, 0xc0022caea0, {0xc00192cbe8, 0x16}, {0x117281c801486f58?, 0xc001486f60?}, {0xddb3c13?, 0xdd0bc6f?}, {0xc001f60480, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0022caea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0022caea0, 0xc001921d80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 4369
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3497 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0015a5840, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3492
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2912 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0015a4290, 0x1c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc000977d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0015a42c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019ee000, {0x111787e0, 0xc001492030}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019ee000, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2919
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4569 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5a13cf28, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0015372c0?, 0xc0013b6aa8?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0015372c0, {0xc0013b6aa8, 0x558, 0x558})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00166e5c0, {0xc0013b6aa8?, 0xc0016d6c40?, 0x202?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018f4360, {0x111771a8, 0xc0005c4cc8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x111772e8, 0xc0018f4360}, {0x111771a8, 0xc0005c4cc8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001486678?, {0x111772e8, 0xc0018f4360})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x126c8d10?, {0x111772e8?, 0xc0018f4360?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x111772e8, 0xc0018f4360}, {0x11177268, 0xc00166e5c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0005bf2c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4568
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3450 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3449
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3387 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc00075d950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0016cd6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0016cd6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016cd6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0016cd6c0, 0xc001762240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3382
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3916 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3915
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2929 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc0009f9750, 0xc001360f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x60?, 0xc0009f9750, 0xc0009f9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xe231876?, 0xc0015ca300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0009f97d0?, 0xddfa844?, 0xc0018b6660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2919
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 1624 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1511
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3520 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc000117f50, 0xc000117f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x10?, 0xc000117f50, 0xc000117f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc00056ba00?, 0xddb4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000117fd0?, 0xddfa844?, 0xc0015ac750?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3497
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3385 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc00075d950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0016cd380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0016cd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016cd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0016cd380, 0xc0017621c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3382
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1374 [IO wait, 99 minutes]:
internal/poll.runtime_pollWait(0x5a13d020, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0021ca180?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0021ca180)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0021ca180)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00228c280)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00228c280)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0002305a0, {0x111915e0, 0xc00228c280})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0002305a0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0022ca340?, 0xc0022ca680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1371
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 2944 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0016ccb60, {0xfa0c2c1?, 0xddb3c13?}, 0x1116a6e0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0016ccb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0016ccb60, 0x1116a580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2919 [chan receive, 52 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0015a42c0, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2025 [select, 95 minutes]:
net/http.(*persistConn).readLoop(0xc001809b00)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 2037
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 1973 [chan send, 95 minutes]:
os/exec.(*Cmd).watchCtx(0xc00183ec00, 0xc0013c3440)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1498
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2930 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2929
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4570 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5a13ce30, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001537440?, 0xc0015e0a76?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001537440, {0xc0015e0a76, 0x358a, 0x358a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00166e5e8, {0xc0015e0a76?, 0xddf8925?, 0x7e5b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018f4390, {0x111771a8, 0xc0005c4cd0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x111772e8, 0xc0018f4390}, {0x111771a8, 0xc0005c4cd0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x1263ab80?, {0x111772e8, 0xc0018f4390})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x126c8d10?, {0x111772e8?, 0xc0018f4390?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x111772e8, 0xc0018f4390}, {0x11177268, 0xc00166e5e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001921d80?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4568
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 4369 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00144c000, {0xfa19552?, 0x60400000004?}, 0xc001921d80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00144c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00144c000, 0xc0021ca580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3383
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1610 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc001475f50, 0xc001362f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x40?, 0xc001475f50, 0xc001475f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc0022caea0?, 0xddb4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001475fd0?, 0xddfa844?, 0xc0022a4840?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1625
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 1925 [chan send, 95 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017a6900, 0xc000059980)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1924
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3388 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc00075d950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0016cd860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0016cd860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016cd860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0016cd860, 0xc0017622c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3382
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 4589 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x5a13c768, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00220d500?, 0xc0014cac92?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00220d500, {0xc0014cac92, 0x36e, 0x36e})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00166e7c8, {0xc0014cac92?, 0xddf89da?, 0x22a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013bc180, {0x111771a8, 0xc0005c4ea8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x111772e8, 0xc0013bc180}, {0x111771a8, 0xc0005c4ea8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x1263ab80?, {0x111772e8, 0xc0013bc180})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x126c8d10?, {0x111772e8?, 0xc0013bc180?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x111772e8, 0xc0013bc180}, {0x11177268, 0xc00166e7c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0021cac80?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4588
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3384 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc00075d950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0016ccd00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0016ccd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0016ccd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0016ccd00, 0xc001762180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3382
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2961 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc00075d950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc000669d40, 0xc00148f5f0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2832
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1779 [chan send, 95 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b10480, 0xc0018b6300)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1778
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 4478 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0016cdba0, {0xfa19552?, 0x60400000004?}, 0xc0021cac80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0016cdba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0016cdba0, 0xc0017e4680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3386
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3451 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001762600, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3449
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4591 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001f61680, 0xc000058780)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4588
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2026 [select, 95 minutes]:
net/http.(*persistConn).writeLoop(0xc001809b00)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 2037
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 1966 [chan send, 95 minutes]:
os/exec.(*Cmd).watchCtx(0xc00183e600, 0xc0013c2e40)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1965
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3677 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0021505d0, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001363d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002150600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000840910, {0x111787e0, 0xc0017ae4e0}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000840910, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3687
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3705 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002151c10, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00141ed80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002151c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001aade30, {0x111787e0, 0xc001bf31d0}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001aade30, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3693
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4187 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0002134d0, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0018fb580?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000213a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001aad920, {0x111787e0, 0xc00142a390}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001aad920, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4175
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3687 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002150600, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3673
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3678 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc001479f50, 0xc001479f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x80?, 0xc001479f50, 0xc001479f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc00056a820?, 0xddb4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001479fd0?, 0xddfa844?, 0xc0014501b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3687
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3693 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002151c40, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3691
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3961 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00052f100, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3956
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4388 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc0009f9750, 0xc0009f9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0xc0?, 0xc0009f9750, 0xc0009f9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc00144cb60?, 0xddb4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0009f97d0?, 0xddfa844?, 0xc0017ae660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4377
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 3915 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc001431f50, 0xc001431f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x20?, 0xc001431f50, 0xc001431f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xc00056ba00?, 0xddb4540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xddfa7e5?, 0xc0021b8180?, 0xc000058720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3933
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 4500 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000abacd0, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0018fc580?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000abad40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017a4370, {0x111787e0, 0xc0017ae390}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017a4370, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4489
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3932 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3928
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 3947 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00052f0d0, 0xe)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001365d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00052f100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001aac180, {0x111787e0, 0xc001471800}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001aac180, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3914 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000569b10, 0xe)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc00141cd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000569b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017a4260, {0x111787e0, 0xc001a7c240}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017a4260, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3933
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 3933 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000569b40, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3928
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 3960 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3956
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4376 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4367
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4137 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4136
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4175 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000213a00, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4136 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc001430f50, 0xc001430f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x0?, 0xc001430f50, 0xc001430f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0xe231876?, 0xc0014eaf00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001430fd0?, 0xddfa844?, 0xc0001fa600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4149
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 4174 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4189 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4188
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4524 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4533
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4489 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000abad40, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4484
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4501 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1119eb10, 0xc0005be120}, 0xc0018f9f50, 0xc0018f9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1119eb10, 0xc0005be120}, 0x0?, 0xc0018f9f50, 0xc0018f9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1119eb10?, 0xc0005be120?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xe2bab25?, 0xc001906120?, 0x111951c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4489
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 4525 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000989d40, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4533
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4539 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4538
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4387 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001762a90, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0009fad80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x111b8c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001762ac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00087efd0, {0x111787e0, 0xc00158de60}, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00087efd0, 0x3b9aca00, 0x0, 0x1, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4377
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 4389 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4388
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4377 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001762ac0, 0xc0005be120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4367
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 4588 [syscall, 2 minutes]:
syscall.syscall6(0xc0013bdf80?, 0x1000000000010?, 0x10000000019?, 0x59c86b58?, 0x90?, 0x131c45b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001793b48?, 0xdc810c5?, 0x90?, 0x110d3e00?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xddb1885?, 0xc001793b7c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0014dbfb0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001f61680)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001f61680)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0022cba00, 0xc001f61680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x1119e950, 0xc000526000}, 0xc0022cba00, {0xc001355ef0, 0x11}, {0x41ab860018f7f58?, 0xc0018f7f60?}, {0xddb3c13?, 0xdd0bc6f?}, {0xc001443a00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0022cba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0022cba00, 0xc0021cac80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 4478
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 4488 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x111951c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4484
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 4502 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4501
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb


Test pass (176/219)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 21.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.0/json-events 8.41
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.29
18 TestDownloadOnly/v1.31.0/DeleteAll 0.23
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.21
21 TestBinaryMirror 0.97
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 223.59
31 TestAddons/serial/GCPAuth/Namespaces 0.1
34 TestAddons/parallel/Ingress 20.2
35 TestAddons/parallel/InspektorGadget 10.76
36 TestAddons/parallel/MetricsServer 5.46
37 TestAddons/parallel/HelmTiller 11.16
39 TestAddons/parallel/CSI 54.43
40 TestAddons/parallel/Headlamp 19.38
41 TestAddons/parallel/CloudSpanner 5.41
42 TestAddons/parallel/LocalPath 54.67
43 TestAddons/parallel/NvidiaDevicePlugin 6.52
44 TestAddons/parallel/Yakd 10.45
45 TestAddons/StoppedEnableDisable 5.94
53 TestHyperKitDriverInstallOrUpdate 8.15
56 TestErrorSpam/setup 36.41
57 TestErrorSpam/start 1.72
58 TestErrorSpam/status 0.49
59 TestErrorSpam/pause 1.36
60 TestErrorSpam/unpause 1.44
61 TestErrorSpam/stop 155.84
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 79.18
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.26
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.99
73 TestFunctional/serial/CacheCmd/cache/add_local 1.31
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
78 TestFunctional/serial/CacheCmd/cache/delete 0.16
79 TestFunctional/serial/MinikubeKubectlCmd 1.28
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.65
81 TestFunctional/serial/ExtraConfig 39.33
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 2.6
84 TestFunctional/serial/LogsFileCmd 2.71
85 TestFunctional/serial/InvalidService 3.71
87 TestFunctional/parallel/ConfigCmd 0.51
88 TestFunctional/parallel/DashboardCmd 14.49
89 TestFunctional/parallel/DryRun 1.26
90 TestFunctional/parallel/InternationalLanguage 0.55
91 TestFunctional/parallel/StatusCmd 0.53
95 TestFunctional/parallel/ServiceCmdConnect 6.59
96 TestFunctional/parallel/AddonsCmd 0.24
97 TestFunctional/parallel/PersistentVolumeClaim 26.06
99 TestFunctional/parallel/SSHCmd 0.34
100 TestFunctional/parallel/CpCmd 0.95
101 TestFunctional/parallel/MySQL 25.87
102 TestFunctional/parallel/FileSync 0.16
103 TestFunctional/parallel/CertSync 0.9
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.13
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 0.35
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.28
119 TestFunctional/parallel/ImageCommands/Setup 1.71
120 TestFunctional/parallel/DockerEnv/bash 0.53
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.84
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.63
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.38
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
131 TestFunctional/parallel/ServiceCmd/DeployApp 20.13
132 TestFunctional/parallel/ServiceCmd/List 0.18
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.18
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.24
135 TestFunctional/parallel/ServiceCmd/Format 0.25
136 TestFunctional/parallel/ServiceCmd/URL 0.32
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
149 TestFunctional/parallel/ProfileCmd/profile_list 0.26
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
151 TestFunctional/parallel/MountCmd/any-port 6.02
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.27
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 193.35
161 TestMultiControlPlane/serial/DeployApp 4.87
162 TestMultiControlPlane/serial/PingHostFromPods 1.29
163 TestMultiControlPlane/serial/AddWorkerNode 53.36
164 TestMultiControlPlane/serial/NodeLabels 0.05
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.35
166 TestMultiControlPlane/serial/CopyFile 9.3
167 TestMultiControlPlane/serial/StopSecondaryNode 8.71
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.3
169 TestMultiControlPlane/serial/RestartSecondaryNode 44.67
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.34
181 TestImageBuild/serial/Setup 37.61
182 TestImageBuild/serial/NormalBuild 1.66
183 TestImageBuild/serial/BuildWithBuildArg 0.85
184 TestImageBuild/serial/BuildWithDockerIgnore 0.61
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.68
189 TestJSONOutput/start/Command 48.52
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.47
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.46
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 8.33
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.58
217 TestMainNoArgs 0.08
218 TestMinikubeProfile 86.26
224 TestMultiNode/serial/FreshStart2Nodes 106.43
225 TestMultiNode/serial/DeployApp2Nodes 5.22
226 TestMultiNode/serial/PingHostFrom2Pods 0.93
227 TestMultiNode/serial/AddNode 45.33
228 TestMultiNode/serial/MultiNodeLabels 0.05
229 TestMultiNode/serial/ProfileList 0.18
230 TestMultiNode/serial/CopyFile 5.3
231 TestMultiNode/serial/StopNode 2.83
232 TestMultiNode/serial/StartAfterStop 41.63
235 TestMultiNode/serial/StopMultiNode 16.78
236 TestMultiNode/serial/RestartMultiNode 215.81
237 TestMultiNode/serial/ValidateNameConflict 45.83
241 TestPreload 140.47
244 TestSkaffold 116.32
247 TestRunningBinaryUpgrade 92.81
249 TestKubernetesUpgrade 1348.16
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.15
264 TestStoppedBinaryUpgrade/Setup 1.29
265 TestStoppedBinaryUpgrade/Upgrade 123.68
268 TestStoppedBinaryUpgrade/MinikubeLogs 2.48
277 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
278 TestNoKubernetes/serial/StartWithK8s 71.62
280 TestNoKubernetes/serial/StartWithStopK8s 8.68
281 TestNoKubernetes/serial/Start 21.91
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
283 TestNoKubernetes/serial/ProfileList 0.45
284 TestNoKubernetes/serial/Stop 2.38
285 TestNoKubernetes/serial/StartNoArgs 19.28
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
TestDownloadOnly/v1.20.0/json-events (21.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-747000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-747000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (21.568838671s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.57s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-747000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-747000: exit status 85 (294.903526ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |          |
	|         | -p download-only-747000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:28:42
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 11:28:42.826658    8366 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:28:42.826946    8366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:42.826953    8366 out.go:358] Setting ErrFile to fd 2...
	I0906 11:28:42.826957    8366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:28:42.827117    8366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	W0906 11:28:42.827225    8366 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19576-7784/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19576-7784/.minikube/config/config.json: no such file or directory
	I0906 11:28:42.829008    8366 out.go:352] Setting JSON to true
	I0906 11:28:42.851862    8366 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8893,"bootTime":1725638429,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 11:28:42.851967    8366 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:28:42.874274    8366 out.go:97] [download-only-747000] minikube v1.34.0 on Darwin 14.6.1
	W0906 11:28:42.874414    8366 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 11:28:42.874465    8366 notify.go:220] Checking for updates...
	I0906 11:28:42.896027    8366 out.go:169] MINIKUBE_LOCATION=19576
	I0906 11:28:42.919179    8366 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:28:42.942166    8366 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 11:28:42.963193    8366 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:28:43.006048    8366 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	W0906 11:28:43.047890    8366 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 11:28:43.048415    8366 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:28:43.079008    8366 out.go:97] Using the hyperkit driver based on user configuration
	I0906 11:28:43.079069    8366 start.go:297] selected driver: hyperkit
	I0906 11:28:43.079084    8366 start.go:901] validating driver "hyperkit" against <nil>
	I0906 11:28:43.079302    8366 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:28:43.079542    8366 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 11:28:43.275033    8366 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 11:28:43.280297    8366 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:28:43.280318    8366 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 11:28:43.280344    8366 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:28:43.283193    8366 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0906 11:28:43.283339    8366 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 11:28:43.283368    8366 cni.go:84] Creating CNI manager for ""
	I0906 11:28:43.283383    8366 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 11:28:43.283456    8366 start.go:340] cluster config:
	{Name:download-only-747000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:28:43.283678    8366 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:28:43.305532    8366 out.go:97] Downloading VM boot image ...
	I0906 11:28:43.305670    8366 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 11:28:51.690360    8366 out.go:97] Starting "download-only-747000" primary control-plane node in "download-only-747000" cluster
	I0906 11:28:51.690385    8366 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:51.749214    8366 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0906 11:28:51.749264    8366 cache.go:56] Caching tarball of preloaded images
	I0906 11:28:51.749631    8366 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:28:51.769748    8366 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0906 11:28:51.769757    8366 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 11:28:51.846710    8366 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0906 11:29:01.657149    8366 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 11:29:01.657402    8366 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 11:29:02.207889    8366 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0906 11:29:02.208148    8366 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/download-only-747000/config.json ...
	I0906 11:29:02.208173    8366 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/download-only-747000/config.json: {Name:mka3415e44a3ca1386a74168adbc24ea171cfe66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 11:29:02.208493    8366 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 11:29:02.208802    8366 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-747000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-747000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-747000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0/json-events (8.41s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-709000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-709000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=hyperkit : (8.405260767s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (8.41s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-709000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-709000: exit status 85 (291.832131ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:28 PDT |                     |
	|         | -p download-only-747000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| delete  | -p download-only-747000        | download-only-747000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT | 06 Sep 24 11:29 PDT |
	| start   | -o=json --download-only        | download-only-709000 | jenkins | v1.34.0 | 06 Sep 24 11:29 PDT |                     |
	|         | -p download-only-709000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 11:29:05
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 11:29:05.134518    8396 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:29:05.134780    8396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:05.134787    8396 out.go:358] Setting ErrFile to fd 2...
	I0906 11:29:05.134791    8396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:29:05.134977    8396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 11:29:05.136379    8396 out.go:352] Setting JSON to true
	I0906 11:29:05.158618    8396 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":8916,"bootTime":1725638429,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 11:29:05.158710    8396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:29:05.180715    8396 out.go:97] [download-only-709000] minikube v1.34.0 on Darwin 14.6.1
	I0906 11:29:05.180958    8396 notify.go:220] Checking for updates...
	I0906 11:29:05.202553    8396 out.go:169] MINIKUBE_LOCATION=19576
	I0906 11:29:05.223720    8396 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:29:05.245671    8396 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 11:29:05.266763    8396 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:29:05.289665    8396 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	W0906 11:29:05.331609    8396 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 11:29:05.332144    8396 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:29:05.362693    8396 out.go:97] Using the hyperkit driver based on user configuration
	I0906 11:29:05.362749    8396 start.go:297] selected driver: hyperkit
	I0906 11:29:05.362763    8396 start.go:901] validating driver "hyperkit" against <nil>
	I0906 11:29:05.362962    8396 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:29:05.363226    8396 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19576-7784/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0906 11:29:05.373107    8396 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.34.0
	I0906 11:29:05.377666    8396 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:29:05.377696    8396 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0906 11:29:05.377725    8396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 11:29:05.380400    8396 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0906 11:29:05.380544    8396 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 11:29:05.380575    8396 cni.go:84] Creating CNI manager for ""
	I0906 11:29:05.380590    8396 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 11:29:05.380599    8396 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 11:29:05.380681    8396 start.go:340] cluster config:
	{Name:download-only-709000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-709000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:29:05.380766    8396 iso.go:125] acquiring lock: {Name:mkc3f7015e4c5d24b146cfa551092542fe897e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 11:29:05.401442    8396 out.go:97] Starting "download-only-709000" primary control-plane node in "download-only-709000" cluster
	I0906 11:29:05.401469    8396 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:05.466867    8396 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0906 11:29:05.466939    8396 cache.go:56] Caching tarball of preloaded images
	I0906 11:29:05.467453    8396 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 11:29:05.489269    8396 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0906 11:29:05.489290    8396 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0906 11:29:05.571290    8396 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /Users/jenkins/minikube-integration/19576-7784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-709000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-709000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-709000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.97s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-050000 --alsologtostderr --binary-mirror http://127.0.0.1:53785 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-050000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-050000
--- PASS: TestBinaryMirror (0.97s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-565000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-565000: exit status 85 (208.92781ms)

-- stdout --
	* Profile "addons-565000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-565000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-565000: exit status 85 (187.234185ms)

-- stdout --
	* Profile "addons-565000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (223.59s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-565000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-565000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m43.590109138s)
--- PASS: TestAddons/Setup (223.59s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-565000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-565000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Ingress (20.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-565000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-565000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-565000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0a3907ed-90a7-40fa-a63f-ea9475e3d2cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0a3907ed-90a7-40fa-a63f-ea9475e3d2cd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004208985s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-565000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.21
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-565000 addons disable ingress --alsologtostderr -v=1: (7.454106311s)
--- PASS: TestAddons/parallel/Ingress (20.20s)

TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6j9m6" [c7dfb082-50ec-4c2d-8cdc-84392970a307] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004547902s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-565000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-565000: (5.757712955s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

TestAddons/parallel/MetricsServer (5.46s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.520362ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-s7gvk" [97f3e6bc-d390-4600-b657-42ea5558f40a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003348654s
addons_test.go:417: (dbg) Run:  kubectl --context addons-565000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.46s)

TestAddons/parallel/HelmTiller (11.16s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.836888ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-xhknj" [0749425f-a61d-4e70-9b16-7d3819962a26] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003383381s
addons_test.go:475: (dbg) Run:  kubectl --context addons-565000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-565000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.748467575s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.16s)

TestAddons/parallel/CSI (54.43s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.205252ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-565000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-565000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9e0a3a02-db62-4225-af0a-61b6b09430ff] Pending
helpers_test.go:344: "task-pv-pod" [9e0a3a02-db62-4225-af0a-61b6b09430ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9e0a3a02-db62-4225-af0a-61b6b09430ff] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003236326s
addons_test.go:590: (dbg) Run:  kubectl --context addons-565000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-565000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-565000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-565000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-565000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-565000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-565000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [326e8c29-a8aa-4f17-82e9-9ff81a8a9f25] Pending
helpers_test.go:344: "task-pv-pod-restore" [326e8c29-a8aa-4f17-82e9-9ff81a8a9f25] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [326e8c29-a8aa-4f17-82e9-9ff81a8a9f25] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004617609s
addons_test.go:632: (dbg) Run:  kubectl --context addons-565000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-565000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-565000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-565000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.424968333s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.43s)

TestAddons/parallel/Headlamp (19.38s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-565000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-jtbtx" [4284e1ef-5800-4973-bb2b-f6e95b5d5b71] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-jtbtx" [4284e1ef-5800-4973-bb2b-f6e95b5d5b71] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004358183s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-565000 addons disable headlamp --alsologtostderr -v=1: (5.463848666s)
--- PASS: TestAddons/parallel/Headlamp (19.38s)

TestAddons/parallel/CloudSpanner (5.41s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-gkpvx" [11667f7c-c4d1-48b2-b5f2-9d23948aa4c8] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003305262s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-565000
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

TestAddons/parallel/LocalPath (54.67s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-565000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-565000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b8602a25-3101-44d9-a71a-a40516dfac7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b8602a25-3101-44d9-a71a-a40516dfac7a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b8602a25-3101-44d9-a71a-a40516dfac7a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003207539s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-565000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 ssh "cat /opt/local-path-provisioner/pvc-9e845a53-9b9e-407e-94ba-aac6318fa583_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-565000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-565000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-565000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.982471469s)
--- PASS: TestAddons/parallel/LocalPath (54.67s)

TestAddons/parallel/NvidiaDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2d5x4" [1aa1523c-25e2-4776-be7b-082ac60d2875] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.002997322s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-565000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

TestAddons/parallel/Yakd (10.45s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-vz5pv" [eed8757b-de74-4154-b3a6-cbe152481ab1] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003448329s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-565000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-565000 addons disable yakd --alsologtostderr -v=1: (5.448581048s)
--- PASS: TestAddons/parallel/Yakd (10.45s)

TestAddons/StoppedEnableDisable (5.94s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-565000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-565000: (5.379115926s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-565000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-565000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-565000
--- PASS: TestAddons/StoppedEnableDisable (5.94s)

TestHyperKitDriverInstallOrUpdate (8.15s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.15s)

TestErrorSpam/setup (36.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-778000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-778000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 --driver=hyperkit : (36.414094585s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (36.41s)

TestErrorSpam/start (1.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 start --dry-run
--- PASS: TestErrorSpam/start (1.72s)

TestErrorSpam/status (0.49s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 status
--- PASS: TestErrorSpam/status (0.49s)

TestErrorSpam/pause (1.36s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 pause
--- PASS: TestErrorSpam/pause (1.36s)

TestErrorSpam/unpause (1.44s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (155.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 stop: (5.410544251s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 stop
E0906 11:47:59.465251    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:59.473980    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:59.485543    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:59.509157    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:59.551055    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:59.634656    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:47:59.798243    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:00.121870    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:00.764499    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:02.046748    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:04.609929    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:09.731950    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:19.975545    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:48:40.459183    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 stop: (1m15.226470454s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 stop
E0906 11:49:21.422713    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-778000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-778000 stop: (1m15.2049161s)
--- PASS: TestErrorSpam/stop (155.84s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19576-7784/.minikube/files/etc/test/nested/copy/8364/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0906 11:50:43.345056    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m19.17986683s)
--- PASS: TestFunctional/serial/StartWithProxy (79.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123000 --alsologtostderr -v=8: (41.255185356s)
functional_test.go:663: soft start took 41.255709548s for "functional-123000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.26s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-123000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-123000 cache add registry.k8s.io/pause:3.1: (1.02317305s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-123000 cache add registry.k8s.io/pause:3.3: (1.047085709s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2531254154/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cache add minikube-local-cache-test:functional-123000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cache delete minikube-local-cache-test:functional-123000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-123000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (147.218962ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-amd64 -p functional-123000 cache reload: (1.565818725s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.28s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 kubectl -- --context functional-123000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-123000 kubectl -- --context functional-123000 get pods: (1.284583615s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.28s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.65s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-123000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-123000 get pods: (1.644754514s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.65s)

TestFunctional/serial/ExtraConfig (39.33s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0906 11:52:59.509483    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-123000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.33377935s)
functional_test.go:761: restart took 39.333881809s for "functional-123000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.33s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-123000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.6s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-123000 logs: (2.604460191s)
--- PASS: TestFunctional/serial/LogsCmd (2.60s)

TestFunctional/serial/LogsFileCmd (2.71s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3412216326/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-123000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3412216326/001/logs.txt: (2.713352062s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.71s)

TestFunctional/serial/InvalidService (3.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-123000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-123000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-123000: exit status 115 (272.389345ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.23:31566 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-123000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.71s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 config get cpus: exit status 14 (66.858126ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 config get cpus: exit status 14 (57.093443ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (14.49s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-123000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-123000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 10280: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.49s)

TestFunctional/parallel/DryRun (1.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-123000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (807.942533ms)

-- stdout --
	* [functional-123000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0906 11:54:05.707850   10235 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:54:05.708190   10235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:54:05.708198   10235 out.go:358] Setting ErrFile to fd 2...
	I0906 11:54:05.708205   10235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:54:05.708429   10235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 11:54:05.766190   10235 out.go:352] Setting JSON to false
	I0906 11:54:05.789900   10235 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10416,"bootTime":1725638429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 11:54:05.790000   10235 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:54:05.865884   10235 out.go:177] * [functional-123000] minikube v1.34.0 on Darwin 14.6.1
	I0906 11:54:05.887007   10235 notify.go:220] Checking for updates...
	I0906 11:54:05.907602   10235 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:54:05.949838   10235 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:54:05.990804   10235 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 11:54:06.053765   10235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:54:06.115770   10235 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 11:54:06.157776   10235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:54:06.179161   10235 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:54:06.179512   10235 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:54:06.179564   10235 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:54:06.188924   10235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55043
	I0906 11:54:06.189335   10235 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:54:06.189772   10235 main.go:141] libmachine: Using API Version  1
	I0906 11:54:06.189782   10235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:54:06.190031   10235 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:54:06.190165   10235 main.go:141] libmachine: (functional-123000) Calling .DriverName
	I0906 11:54:06.190386   10235 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:54:06.190644   10235 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:54:06.190669   10235 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:54:06.199395   10235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55048
	I0906 11:54:06.199748   10235 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:54:06.200116   10235 main.go:141] libmachine: Using API Version  1
	I0906 11:54:06.200136   10235 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:54:06.200381   10235 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:54:06.200487   10235 main.go:141] libmachine: (functional-123000) Calling .DriverName
	I0906 11:54:06.278881   10235 out.go:177] * Using the hyperkit driver based on existing profile
	I0906 11:54:06.299678   10235 start.go:297] selected driver: hyperkit
	I0906 11:54:06.299696   10235 start.go:901] validating driver "hyperkit" against &{Name:functional-123000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-12
3000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.23 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:54:06.299806   10235 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:54:06.323991   10235 out.go:201] 
	W0906 11:54:06.344972   10235 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 11:54:06.365966   10235 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.26s)

TestFunctional/parallel/InternationalLanguage (0.55s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-123000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-123000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (545.839136ms)

-- stdout --
	* [functional-123000] minikube v1.34.0 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0906 11:54:06.909063   10261 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:54:06.909314   10261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:54:06.909320   10261 out.go:358] Setting ErrFile to fd 2...
	I0906 11:54:06.909324   10261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:54:06.909533   10261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 11:54:06.911218   10261 out.go:352] Setting JSON to false
	I0906 11:54:06.934433   10261 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10417,"bootTime":1725638429,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0906 11:54:06.934523   10261 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0906 11:54:06.956048   10261 out.go:177] * [functional-123000] minikube v1.34.0 sur Darwin 14.6.1
	I0906 11:54:06.997584   10261 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 11:54:06.997609   10261 notify.go:220] Checking for updates...
	I0906 11:54:07.039616   10261 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	I0906 11:54:07.060521   10261 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0906 11:54:07.081705   10261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 11:54:07.139359   10261 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	I0906 11:54:07.180651   10261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 11:54:07.202395   10261 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:54:07.203063   10261 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:54:07.203165   10261 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:54:07.212551   10261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55056
	I0906 11:54:07.212986   10261 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:54:07.213409   10261 main.go:141] libmachine: Using API Version  1
	I0906 11:54:07.213426   10261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:54:07.213682   10261 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:54:07.213814   10261 main.go:141] libmachine: (functional-123000) Calling .DriverName
	I0906 11:54:07.214018   10261 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 11:54:07.214268   10261 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:54:07.214295   10261 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:54:07.223345   10261 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55060
	I0906 11:54:07.223701   10261 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:54:07.224055   10261 main.go:141] libmachine: Using API Version  1
	I0906 11:54:07.224114   10261 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:54:07.224338   10261 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:54:07.224464   10261 main.go:141] libmachine: (functional-123000) Calling .DriverName
	I0906 11:54:07.252653   10261 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0906 11:54:07.294764   10261 start.go:297] selected driver: hyperkit
	I0906 11:54:07.294795   10261 start.go:901] validating driver "hyperkit" against &{Name:functional-123000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-12
3000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.23 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 11:54:07.295085   10261 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 11:54:07.320663   10261 out.go:201] 
	W0906 11:54:07.341678   10261 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 11:54:07.362537   10261 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.55s)

TestFunctional/parallel/StatusCmd (0.53s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)

TestFunctional/parallel/ServiceCmdConnect (6.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-123000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-123000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fb5m4" [0866dcb4-b996-42f4-88f8-c87b3667b5ea] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fb5m4" [0866dcb4-b996-42f4-88f8-c87b3667b5ea] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.005117092s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.169.0.23:31079
functional_test.go:1675: http://192.169.0.23:31079: success! body:

Hostname: hello-node-connect-67bdd5bbb4-fb5m4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.23:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.23:31079
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.59s)

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (26.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d8cfa444-cbff-47d6-9fee-1d1502eef9d2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004263104s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-123000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-123000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-123000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-123000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [62c2f9f6-b437-489f-9cd6-12885830ac63] Pending
helpers_test.go:344: "sp-pod" [62c2f9f6-b437-489f-9cd6-12885830ac63] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [62c2f9f6-b437-489f-9cd6-12885830ac63] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003224099s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-123000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-123000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-123000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f12be607-e2a1-464f-b83e-8ebe7f8f2fb8] Pending
helpers_test.go:344: "sp-pod" [f12be607-e2a1-464f-b83e-8ebe7f8f2fb8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f12be607-e2a1-464f-b83e-8ebe7f8f2fb8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007021591s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-123000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.06s)

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (0.95s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh -n functional-123000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cp functional-123000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1902240873/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh -n functional-123000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh -n functional-123000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.95s)

TestFunctional/parallel/MySQL (25.87s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-123000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-jtjbn" [88c66664-0009-4a8b-bec6-29fbf87bdcc5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-jtjbn" [88c66664-0009-4a8b-bec6-29fbf87bdcc5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.00314531s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-123000 exec mysql-6cdb49bbb-jtjbn -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-123000 exec mysql-6cdb49bbb-jtjbn -- mysql -ppassword -e "show databases;": exit status 1 (159.210271ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-123000 exec mysql-6cdb49bbb-jtjbn -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-123000 exec mysql-6cdb49bbb-jtjbn -- mysql -ppassword -e "show databases;": exit status 1 (104.427229ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-123000 exec mysql-6cdb49bbb-jtjbn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.87s)

TestFunctional/parallel/FileSync (0.16s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/8364/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo cat /etc/test/nested/copy/8364/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

TestFunctional/parallel/CertSync (0.9s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/8364.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo cat /etc/ssl/certs/8364.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/8364.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo cat /usr/share/ca-certificates/8364.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/83642.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo cat /etc/ssl/certs/83642.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/83642.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo cat /usr/share/ca-certificates/83642.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.90s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-123000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "sudo systemctl is-active crio": exit status 1 (131.046303ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-123000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-123000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-123000 image ls --format short --alsologtostderr:
I0906 11:54:21.947142   10365 out.go:345] Setting OutFile to fd 1 ...
I0906 11:54:21.967258   10365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:21.967292   10365 out.go:358] Setting ErrFile to fd 2...
I0906 11:54:21.967301   10365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:21.967629   10365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
I0906 11:54:21.988759   10365 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:21.988922   10365 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:21.989396   10365 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:21.989467   10365 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:21.998532   10365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55215
I0906 11:54:21.998992   10365 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:21.999444   10365 main.go:141] libmachine: Using API Version  1
I0906 11:54:21.999454   10365 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:21.999716   10365 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:21.999847   10365 main.go:141] libmachine: (functional-123000) Calling .GetState
I0906 11:54:21.999948   10365 main.go:141] libmachine: (functional-123000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0906 11:54:22.000016   10365 main.go:141] libmachine: (functional-123000) DBG | hyperkit pid from json: 9674
I0906 11:54:22.001297   10365 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.001321   10365 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.009776   10365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55217
I0906 11:54:22.010139   10365 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.010487   10365 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.010496   10365 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.010736   10365 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.010843   10365 main.go:141] libmachine: (functional-123000) Calling .DriverName
I0906 11:54:22.011016   10365 ssh_runner.go:195] Run: systemctl --version
I0906 11:54:22.011034   10365 main.go:141] libmachine: (functional-123000) Calling .GetSSHHostname
I0906 11:54:22.011127   10365 main.go:141] libmachine: (functional-123000) Calling .GetSSHPort
I0906 11:54:22.011241   10365 main.go:141] libmachine: (functional-123000) Calling .GetSSHKeyPath
I0906 11:54:22.011333   10365 main.go:141] libmachine: (functional-123000) Calling .GetSSHUsername
I0906 11:54:22.011416   10365 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/functional-123000/id_rsa Username:docker}
I0906 11:54:22.042423   10365 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 11:54:22.062284   10365 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.062293   10365 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.062451   10365 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.062461   10365 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 11:54:22.062472   10365 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.062473   10365 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:22.062479   10365 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.062656   10365 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:22.062690   10365 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.062710   10365 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/minikube-local-cache-test | functional-123000 | 93766f391c72f | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-123000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-123000 image ls --format table --alsologtostderr:
I0906 11:54:22.243199   10379 out.go:345] Setting OutFile to fd 1 ...
I0906 11:54:22.245325   10379 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.245334   10379 out.go:358] Setting ErrFile to fd 2...
I0906 11:54:22.245339   10379 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.245538   10379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
I0906 11:54:22.246284   10379 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.246382   10379 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.246739   10379 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.246787   10379 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.255853   10379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55230
I0906 11:54:22.256293   10379 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.256767   10379 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.256802   10379 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.257072   10379 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.257215   10379 main.go:141] libmachine: (functional-123000) Calling .GetState
I0906 11:54:22.257332   10379 main.go:141] libmachine: (functional-123000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0906 11:54:22.257412   10379 main.go:141] libmachine: (functional-123000) DBG | hyperkit pid from json: 9674
I0906 11:54:22.258840   10379 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.258867   10379 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.268606   10379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55232
I0906 11:54:22.269034   10379 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.269386   10379 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.269397   10379 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.269607   10379 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.269753   10379 main.go:141] libmachine: (functional-123000) Calling .DriverName
I0906 11:54:22.269913   10379 ssh_runner.go:195] Run: systemctl --version
I0906 11:54:22.269930   10379 main.go:141] libmachine: (functional-123000) Calling .GetSSHHostname
I0906 11:54:22.270013   10379 main.go:141] libmachine: (functional-123000) Calling .GetSSHPort
I0906 11:54:22.270095   10379 main.go:141] libmachine: (functional-123000) Calling .GetSSHKeyPath
I0906 11:54:22.270175   10379 main.go:141] libmachine: (functional-123000) Calling .GetSSHUsername
I0906 11:54:22.270265   10379 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/functional-123000/id_rsa Username:docker}
I0906 11:54:22.301680   10379 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 11:54:22.320622   10379 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.320641   10379 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.320772   10379 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.320774   10379 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:22.320780   10379 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 11:54:22.320786   10379 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.320791   10379 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.320935   10379 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:22.320936   10379 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.320950   10379 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123000 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"93766f391c72fa6bab0c275689a1a7b3c5dca4bc0714f1964ba45bc451929615","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-123000"],"size":"30"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-123000"],"size":"4940000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-123000 image ls --format json --alsologtostderr:
I0906 11:54:22.144918   10374 out.go:345] Setting OutFile to fd 1 ...
I0906 11:54:22.145183   10374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.145190   10374 out.go:358] Setting ErrFile to fd 2...
I0906 11:54:22.145194   10374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.145370   10374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
I0906 11:54:22.145934   10374 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.146025   10374 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.146377   10374 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.146426   10374 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.155066   10374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55225
I0906 11:54:22.155513   10374 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.155957   10374 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.155967   10374 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.156196   10374 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.156311   10374 main.go:141] libmachine: (functional-123000) Calling .GetState
I0906 11:54:22.156394   10374 main.go:141] libmachine: (functional-123000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0906 11:54:22.156483   10374 main.go:141] libmachine: (functional-123000) DBG | hyperkit pid from json: 9674
I0906 11:54:22.157737   10374 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.157760   10374 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.166433   10374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55227
I0906 11:54:22.166769   10374 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.167122   10374 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.167140   10374 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.167390   10374 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.167521   10374 main.go:141] libmachine: (functional-123000) Calling .DriverName
I0906 11:54:22.167678   10374 ssh_runner.go:195] Run: systemctl --version
I0906 11:54:22.167695   10374 main.go:141] libmachine: (functional-123000) Calling .GetSSHHostname
I0906 11:54:22.167777   10374 main.go:141] libmachine: (functional-123000) Calling .GetSSHPort
I0906 11:54:22.167886   10374 main.go:141] libmachine: (functional-123000) Calling .GetSSHKeyPath
I0906 11:54:22.167972   10374 main.go:141] libmachine: (functional-123000) Calling .GetSSHUsername
I0906 11:54:22.168051   10374 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/functional-123000/id_rsa Username:docker}
I0906 11:54:22.197984   10374 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 11:54:22.219578   10374 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.219586   10374 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.219800   10374 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.219803   10374 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:22.219813   10374 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 11:54:22.219821   10374 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.219827   10374 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.220150   10374 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.220161   10374 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 11:54:22.220184   10374 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)
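Editor's note: the `image ls --format json` stdout above is a single JSON array in which each entry carries an `id`, a `repoDigests` list, a `repoTags` list, and a `size` in bytes encoded as a string. A minimal sketch of consuming that shape (the two-entry `sample` below mirrors entries from the log but is an illustrative excerpt, not a verbatim copy):

```python
import json

# Two entries in the same shape as the `image ls --format json` output above.
sample = '''[
  {"id": "0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",
   "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.3"], "size": "683000"},
  {"id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
   "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "148000000"}
]'''

def tags_larger_than(raw: str, threshold_bytes: int) -> list[str]:
    """Return every repo tag whose image size exceeds threshold_bytes.

    Note that "size" arrives as a string and must be converted before comparing.
    """
    images = json.loads(raw)
    return [tag
            for img in images
            if int(img["size"]) > threshold_bytes
            for tag in img["repoTags"]]

print(tags_larger_than(sample, 100_000_000))  # only etcd exceeds 100 MB here
```

Note the string-encoded sizes: naive numeric comparison on the raw field would compare lexicographically, so the `int()` conversion matters.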

TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-123000 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 93766f391c72fa6bab0c275689a1a7b3c5dca4bc0714f1964ba45bc451929615
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-123000
size: "30"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-123000
size: "4940000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-123000 image ls --format yaml --alsologtostderr:
I0906 11:54:22.070634   10370 out.go:345] Setting OutFile to fd 1 ...
I0906 11:54:22.086305   10370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.086321   10370 out.go:358] Setting ErrFile to fd 2...
I0906 11:54:22.086328   10370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.086587   10370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
I0906 11:54:22.087442   10370 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.087579   10370 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.087969   10370 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.088020   10370 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.097234   10370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55220
I0906 11:54:22.097677   10370 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.098111   10370 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.098142   10370 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.098434   10370 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.098564   10370 main.go:141] libmachine: (functional-123000) Calling .GetState
I0906 11:54:22.098647   10370 main.go:141] libmachine: (functional-123000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0906 11:54:22.098724   10370 main.go:141] libmachine: (functional-123000) DBG | hyperkit pid from json: 9674
I0906 11:54:22.099992   10370 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.100018   10370 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.108630   10370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55222
I0906 11:54:22.109028   10370 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.109382   10370 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.109395   10370 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.109658   10370 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.109815   10370 main.go:141] libmachine: (functional-123000) Calling .DriverName
I0906 11:54:22.109984   10370 ssh_runner.go:195] Run: systemctl --version
I0906 11:54:22.110002   10370 main.go:141] libmachine: (functional-123000) Calling .GetSSHHostname
I0906 11:54:22.110086   10370 main.go:141] libmachine: (functional-123000) Calling .GetSSHPort
I0906 11:54:22.110180   10370 main.go:141] libmachine: (functional-123000) Calling .GetSSHKeyPath
I0906 11:54:22.110288   10370 main.go:141] libmachine: (functional-123000) Calling .GetSSHUsername
I0906 11:54:22.110386   10370 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/functional-123000/id_rsa Username:docker}
I0906 11:54:22.142432   10370 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0906 11:54:22.161449   10370 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.161458   10370 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.161601   10370 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.161610   10370 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 11:54:22.161618   10370 main.go:141] libmachine: Making call to close driver server
I0906 11:54:22.161623   10370 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:22.161625   10370 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:22.161792   10370 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:22.161793   10370 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:22.161800   10370 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh pgrep buildkitd: exit status 1 (127.4904ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image build -t localhost/my-image:functional-123000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-123000 image build -t localhost/my-image:functional-123000 testdata/build --alsologtostderr: (1.996704451s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-123000 image build -t localhost/my-image:functional-123000 testdata/build --alsologtostderr:
I0906 11:54:22.431391   10388 out.go:345] Setting OutFile to fd 1 ...
I0906 11:54:22.432289   10388 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.432296   10388 out.go:358] Setting ErrFile to fd 2...
I0906 11:54:22.432300   10388 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 11:54:22.432463   10388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
I0906 11:54:22.433002   10388 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.433629   10388 config.go:182] Loaded profile config "functional-123000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 11:54:22.433966   10388 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.434008   10388 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.442416   10388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55242
I0906 11:54:22.442798   10388 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.443209   10388 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.443219   10388 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.443450   10388 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.443572   10388 main.go:141] libmachine: (functional-123000) Calling .GetState
I0906 11:54:22.443650   10388 main.go:141] libmachine: (functional-123000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0906 11:54:22.443734   10388 main.go:141] libmachine: (functional-123000) DBG | hyperkit pid from json: 9674
I0906 11:54:22.444975   10388 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0906 11:54:22.444997   10388 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0906 11:54:22.453403   10388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55244
I0906 11:54:22.453774   10388 main.go:141] libmachine: () Calling .GetVersion
I0906 11:54:22.454136   10388 main.go:141] libmachine: Using API Version  1
I0906 11:54:22.454151   10388 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 11:54:22.454373   10388 main.go:141] libmachine: () Calling .GetMachineName
I0906 11:54:22.454475   10388 main.go:141] libmachine: (functional-123000) Calling .DriverName
I0906 11:54:22.454626   10388 ssh_runner.go:195] Run: systemctl --version
I0906 11:54:22.454644   10388 main.go:141] libmachine: (functional-123000) Calling .GetSSHHostname
I0906 11:54:22.454720   10388 main.go:141] libmachine: (functional-123000) Calling .GetSSHPort
I0906 11:54:22.454798   10388 main.go:141] libmachine: (functional-123000) Calling .GetSSHKeyPath
I0906 11:54:22.454880   10388 main.go:141] libmachine: (functional-123000) Calling .GetSSHUsername
I0906 11:54:22.454962   10388 sshutil.go:53] new ssh client: &{IP:192.169.0.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/functional-123000/id_rsa Username:docker}
I0906 11:54:22.484977   10388 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3077671674.tar
I0906 11:54:22.485051   10388 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 11:54:22.492826   10388 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3077671674.tar
I0906 11:54:22.496362   10388 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3077671674.tar: stat -c "%s %y" /var/lib/minikube/build/build.3077671674.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3077671674.tar': No such file or directory
I0906 11:54:22.496398   10388 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3077671674.tar --> /var/lib/minikube/build/build.3077671674.tar (3072 bytes)
I0906 11:54:22.517606   10388 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3077671674
I0906 11:54:22.525481   10388 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3077671674 -xf /var/lib/minikube/build/build.3077671674.tar
I0906 11:54:22.533538   10388 docker.go:360] Building image: /var/lib/minikube/build/build.3077671674
I0906 11:54:22.533601   10388 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-123000 /var/lib/minikube/build/build.3077671674
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0b941cbc7dfc1f3292a6cbc7e83eea72a263df5566b8cd4fd13259d8a0e1ad88 done
#8 naming to localhost/my-image:functional-123000 done
#8 DONE 0.0s
I0906 11:54:24.329240   10388 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-123000 /var/lib/minikube/build/build.3077671674: (1.79562077s)
I0906 11:54:24.329297   10388 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3077671674
I0906 11:54:24.337019   10388 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3077671674.tar
I0906 11:54:24.345107   10388 build_images.go:217] Built localhost/my-image:functional-123000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3077671674.tar
I0906 11:54:24.345134   10388 build_images.go:133] succeeded building to: functional-123000
I0906 11:54:24.345139   10388 build_images.go:134] failed building to: 
I0906 11:54:24.345154   10388 main.go:141] libmachine: Making call to close driver server
I0906 11:54:24.345160   10388 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:24.345301   10388 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
I0906 11:54:24.345309   10388 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:24.345317   10388 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 11:54:24.345331   10388 main.go:141] libmachine: Making call to close driver server
I0906 11:54:24.345345   10388 main.go:141] libmachine: (functional-123000) Calling .Close
I0906 11:54:24.345518   10388 main.go:141] libmachine: Successfully made call to close driver server
I0906 11:54:24.345561   10388 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 11:54:24.345586   10388 main.go:141] libmachine: (functional-123000) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.28s)

TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.66256554s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-123000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/DockerEnv/bash (0.53s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-123000 docker-env) && out/minikube-darwin-amd64 status -p functional-123000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-123000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.53s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 update-context --alsologtostderr -v=2
2024/09/06 11:54:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image load --daemon kicbase/echo-server:functional-123000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image load --daemon kicbase/echo-server:functional-123000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.63s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-123000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image load --daemon kicbase/echo-server:functional-123000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image save kicbase/echo-server:functional-123000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image rm kicbase/echo-server:functional-123000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.38s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-123000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 image save --daemon kicbase/echo-server:functional-123000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-123000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (20.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-123000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-123000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-pknts" [9a5d92cd-1415-4915-b1b7-1c77ead1dbd9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0906 11:53:27.232913    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-6b9f76b5c7-pknts" [9a5d92cd-1415-4915-b1b7-1c77ead1dbd9] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.004247755s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.13s)

TestFunctional/parallel/ServiceCmd/List (0.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.18s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 service list -o json
functional_test.go:1494: Took "181.972412ms" to run "out/minikube-darwin-amd64 -p functional-123000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.169.0.23:32422
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.24s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.169.0.23:32422
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-123000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-123000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-123000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10078: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-123000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-123000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-123000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4c297495-a391-4141-b7c1-3496d034d226] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4c297495-a391-4141-b7c1-3496d034d226] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.002945703s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-123000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.4.75 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-123000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "182.739165ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "81.112979ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "184.224327ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "84.826057ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/MountCmd/any-port (6.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4079383357/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725648839526642000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4079383357/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725648839526642000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4079383357/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725648839526642000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4079383357/001/test-1725648839526642000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (152.973818ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 18:53 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 18:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 18:53 test-1725648839526642000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh cat /mount-9p/test-1725648839526642000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-123000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d3613471-d798-4c50-97f9-2128fc57bf4b] Pending
helpers_test.go:344: "busybox-mount" [d3613471-d798-4c50-97f9-2128fc57bf4b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d3613471-d798-4c50-97f9-2128fc57bf4b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d3613471-d798-4c50-97f9-2128fc57bf4b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005460016s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-123000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4079383357/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup953733656/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup953733656/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup953733656/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T" /mount1: exit status 1 (178.550665ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T" /mount1: exit status 1 (195.863753ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-123000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup953733656/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup953733656/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup953733656/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.27s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-123000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-123000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-123000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (193.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-343000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-343000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m12.978896803s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (193.35s)

TestMultiControlPlane/serial/DeployApp (4.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-343000 -- rollout status deployment/busybox: (2.600264868s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-2kj2b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-jk74s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-x6w7h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-2kj2b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-jk74s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-x6w7h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-2kj2b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-jk74s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-x6w7h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.87s)

TestMultiControlPlane/serial/PingHostFromPods (1.29s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-2kj2b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-2kj2b -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-jk74s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-jk74s -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-x6w7h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-343000 -- exec busybox-7dff88458-x6w7h -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

TestMultiControlPlane/serial/AddWorkerNode (53.36s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-343000 -v=7 --alsologtostderr
E0906 11:57:59.511665    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:13.444636    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:13.452021    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:13.463392    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:13.485240    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:13.528457    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:13.611228    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:13.773650    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:14.096735    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:14.738056    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:16.019364    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:18.581851    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:23.704442    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 11:58:33.946926    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-343000 -v=7 --alsologtostderr: (52.903437042s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.36s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-343000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.35s)

TestMultiControlPlane/serial/CopyFile (9.30s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp testdata/cp-test.txt ha-343000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000:/home/docker/cp-test.txt ha-343000-m02:/home/docker/cp-test_ha-343000_ha-343000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test_ha-343000_ha-343000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000:/home/docker/cp-test.txt ha-343000-m03:/home/docker/cp-test_ha-343000_ha-343000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test_ha-343000_ha-343000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000:/home/docker/cp-test.txt ha-343000-m04:/home/docker/cp-test_ha-343000_ha-343000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test_ha-343000_ha-343000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp testdata/cp-test.txt ha-343000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m02:/home/docker/cp-test.txt ha-343000:/home/docker/cp-test_ha-343000-m02_ha-343000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test_ha-343000-m02_ha-343000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m02:/home/docker/cp-test.txt ha-343000-m03:/home/docker/cp-test_ha-343000-m02_ha-343000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test_ha-343000-m02_ha-343000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m02:/home/docker/cp-test.txt ha-343000-m04:/home/docker/cp-test_ha-343000-m02_ha-343000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test_ha-343000-m02_ha-343000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp testdata/cp-test.txt ha-343000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt ha-343000:/home/docker/cp-test_ha-343000-m03_ha-343000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test_ha-343000-m03_ha-343000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt ha-343000-m02:/home/docker/cp-test_ha-343000-m03_ha-343000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test_ha-343000-m03_ha-343000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m03:/home/docker/cp-test.txt ha-343000-m04:/home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test_ha-343000-m03_ha-343000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp testdata/cp-test.txt ha-343000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1095676363/001/cp-test_ha-343000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt ha-343000:/home/docker/cp-test_ha-343000-m04_ha-343000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000 "sudo cat /home/docker/cp-test_ha-343000-m04_ha-343000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt ha-343000-m02:/home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m02 "sudo cat /home/docker/cp-test_ha-343000-m04_ha-343000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 cp ha-343000-m04:/home/docker/cp-test.txt ha-343000-m03:/home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 ssh -n ha-343000-m03 "sudo cat /home/docker/cp-test_ha-343000-m04_ha-343000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.30s)

TestMultiControlPlane/serial/StopSecondaryNode (8.71s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 node stop m02 -v=7 --alsologtostderr
E0906 11:58:54.429200    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 node stop m02 -v=7 --alsologtostderr: (8.344528545s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr: exit status 7 (363.283329ms)

-- stdout --
	ha-343000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-343000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-343000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-343000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0906 11:59:00.825190   10890 out.go:345] Setting OutFile to fd 1 ...
	I0906 11:59:00.825403   10890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:59:00.825409   10890 out.go:358] Setting ErrFile to fd 2...
	I0906 11:59:00.825413   10890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 11:59:00.825587   10890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 11:59:00.825796   10890 out.go:352] Setting JSON to false
	I0906 11:59:00.825820   10890 mustload.go:65] Loading cluster: ha-343000
	I0906 11:59:00.825857   10890 notify.go:220] Checking for updates...
	I0906 11:59:00.826134   10890 config.go:182] Loaded profile config "ha-343000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 11:59:00.826150   10890 status.go:255] checking status of ha-343000 ...
	I0906 11:59:00.826516   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:00.826579   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:00.835881   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55979
	I0906 11:59:00.836366   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:00.836795   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:00.836812   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:00.837004   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:00.837112   10890 main.go:141] libmachine: (ha-343000) Calling .GetState
	I0906 11:59:00.837190   10890 main.go:141] libmachine: (ha-343000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:59:00.837270   10890 main.go:141] libmachine: (ha-343000) DBG | hyperkit pid from json: 10421
	I0906 11:59:00.838320   10890 status.go:330] ha-343000 host status = "Running" (err=<nil>)
	I0906 11:59:00.838340   10890 host.go:66] Checking if "ha-343000" exists ...
	I0906 11:59:00.838591   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:00.838613   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:00.847147   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55981
	I0906 11:59:00.847516   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:00.847836   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:00.847854   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:00.848072   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:00.848172   10890 main.go:141] libmachine: (ha-343000) Calling .GetIP
	I0906 11:59:00.848261   10890 host.go:66] Checking if "ha-343000" exists ...
	I0906 11:59:00.848512   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:00.848537   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:00.861356   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55983
	I0906 11:59:00.861711   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:00.862040   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:00.862057   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:00.862250   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:00.862360   10890 main.go:141] libmachine: (ha-343000) Calling .DriverName
	I0906 11:59:00.862488   10890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 11:59:00.862508   10890 main.go:141] libmachine: (ha-343000) Calling .GetSSHHostname
	I0906 11:59:00.862611   10890 main.go:141] libmachine: (ha-343000) Calling .GetSSHPort
	I0906 11:59:00.862693   10890 main.go:141] libmachine: (ha-343000) Calling .GetSSHKeyPath
	I0906 11:59:00.862781   10890 main.go:141] libmachine: (ha-343000) Calling .GetSSHUsername
	I0906 11:59:00.862857   10890 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000/id_rsa Username:docker}
	I0906 11:59:00.899650   10890 ssh_runner.go:195] Run: systemctl --version
	I0906 11:59:00.904437   10890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 11:59:00.916243   10890 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 11:59:00.916270   10890 api_server.go:166] Checking apiserver status ...
	I0906 11:59:00.916311   10890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:59:00.929123   10890 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0906 11:59:00.937516   10890 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 11:59:00.937571   10890 ssh_runner.go:195] Run: ls
	I0906 11:59:00.940813   10890 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0906 11:59:00.944021   10890 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0906 11:59:00.944033   10890 status.go:422] ha-343000 apiserver status = Running (err=<nil>)
	I0906 11:59:00.944046   10890 status.go:257] ha-343000 status: &{Name:ha-343000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 11:59:00.944058   10890 status.go:255] checking status of ha-343000-m02 ...
	I0906 11:59:00.944326   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:00.944349   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:00.953054   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55987
	I0906 11:59:00.953398   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:00.953712   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:00.953731   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:00.954005   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:00.954141   10890 main.go:141] libmachine: (ha-343000-m02) Calling .GetState
	I0906 11:59:00.954224   10890 main.go:141] libmachine: (ha-343000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:59:00.954296   10890 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid from json: 10441
	I0906 11:59:00.955268   10890 main.go:141] libmachine: (ha-343000-m02) DBG | hyperkit pid 10441 missing from process table
	I0906 11:59:00.955311   10890 status.go:330] ha-343000-m02 host status = "Stopped" (err=<nil>)
	I0906 11:59:00.955322   10890 status.go:343] host is not running, skipping remaining checks
	I0906 11:59:00.955329   10890 status.go:257] ha-343000-m02 status: &{Name:ha-343000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 11:59:00.955340   10890 status.go:255] checking status of ha-343000-m03 ...
	I0906 11:59:00.955602   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:00.955624   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:00.964104   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55989
	I0906 11:59:00.964495   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:00.964842   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:00.964854   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:00.965075   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:00.965192   10890 main.go:141] libmachine: (ha-343000-m03) Calling .GetState
	I0906 11:59:00.965273   10890 main.go:141] libmachine: (ha-343000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:59:00.965358   10890 main.go:141] libmachine: (ha-343000-m03) DBG | hyperkit pid from json: 10460
	I0906 11:59:00.966376   10890 status.go:330] ha-343000-m03 host status = "Running" (err=<nil>)
	I0906 11:59:00.966384   10890 host.go:66] Checking if "ha-343000-m03" exists ...
	I0906 11:59:00.966631   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:00.966656   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:00.975023   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55991
	I0906 11:59:00.975398   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:00.975768   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:00.975783   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:00.976016   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:00.976129   10890 main.go:141] libmachine: (ha-343000-m03) Calling .GetIP
	I0906 11:59:00.976224   10890 host.go:66] Checking if "ha-343000-m03" exists ...
	I0906 11:59:00.976500   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:00.976533   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:00.984957   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55993
	I0906 11:59:00.985323   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:00.985661   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:00.985674   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:00.985906   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:00.986032   10890 main.go:141] libmachine: (ha-343000-m03) Calling .DriverName
	I0906 11:59:00.986162   10890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 11:59:00.986173   10890 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHHostname
	I0906 11:59:00.986266   10890 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHPort
	I0906 11:59:00.986356   10890 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHKeyPath
	I0906 11:59:00.986427   10890 main.go:141] libmachine: (ha-343000-m03) Calling .GetSSHUsername
	I0906 11:59:00.986505   10890 sshutil.go:53] new ssh client: &{IP:192.169.0.26 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m03/id_rsa Username:docker}
	I0906 11:59:01.017314   10890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 11:59:01.029023   10890 kubeconfig.go:125] found "ha-343000" server: "https://192.169.0.254:8443"
	I0906 11:59:01.029036   10890 api_server.go:166] Checking apiserver status ...
	I0906 11:59:01.029079   10890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 11:59:01.040010   10890 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1855/cgroup
	W0906 11:59:01.048120   10890 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1855/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 11:59:01.048192   10890 ssh_runner.go:195] Run: ls
	I0906 11:59:01.051635   10890 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0906 11:59:01.054712   10890 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0906 11:59:01.054724   10890 status.go:422] ha-343000-m03 apiserver status = Running (err=<nil>)
	I0906 11:59:01.054738   10890 status.go:257] ha-343000-m03 status: &{Name:ha-343000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 11:59:01.054749   10890 status.go:255] checking status of ha-343000-m04 ...
	I0906 11:59:01.055022   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:01.055044   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:01.063744   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55997
	I0906 11:59:01.064131   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:01.064453   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:01.064465   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:01.064691   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:01.064808   10890 main.go:141] libmachine: (ha-343000-m04) Calling .GetState
	I0906 11:59:01.064894   10890 main.go:141] libmachine: (ha-343000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 11:59:01.064989   10890 main.go:141] libmachine: (ha-343000-m04) DBG | hyperkit pid from json: 10558
	I0906 11:59:01.066014   10890 status.go:330] ha-343000-m04 host status = "Running" (err=<nil>)
	I0906 11:59:01.066024   10890 host.go:66] Checking if "ha-343000-m04" exists ...
	I0906 11:59:01.066266   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:01.066296   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:01.074777   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55999
	I0906 11:59:01.075135   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:01.075451   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:01.075461   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:01.075667   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:01.075771   10890 main.go:141] libmachine: (ha-343000-m04) Calling .GetIP
	I0906 11:59:01.075853   10890 host.go:66] Checking if "ha-343000-m04" exists ...
	I0906 11:59:01.076095   10890 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 11:59:01.076117   10890 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 11:59:01.084548   10890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56001
	I0906 11:59:01.084901   10890 main.go:141] libmachine: () Calling .GetVersion
	I0906 11:59:01.085224   10890 main.go:141] libmachine: Using API Version  1
	I0906 11:59:01.085235   10890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 11:59:01.085457   10890 main.go:141] libmachine: () Calling .GetMachineName
	I0906 11:59:01.085582   10890 main.go:141] libmachine: (ha-343000-m04) Calling .DriverName
	I0906 11:59:01.085743   10890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 11:59:01.085755   10890 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHHostname
	I0906 11:59:01.085836   10890 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHPort
	I0906 11:59:01.085921   10890 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHKeyPath
	I0906 11:59:01.086032   10890 main.go:141] libmachine: (ha-343000-m04) Calling .GetSSHUsername
	I0906 11:59:01.086117   10890 sshutil.go:53] new ssh client: &{IP:192.169.0.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/ha-343000-m04/id_rsa Username:docker}
	I0906 11:59:01.120957   10890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 11:59:01.131401   10890 status.go:257] ha-343000-m04 status: &{Name:ha-343000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.71s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.3s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.30s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.67s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 node start m02 -v=7 --alsologtostderr
E0906 11:59:35.391553    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-343000 node start m02 -v=7 --alsologtostderr: (44.166976191s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-343000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.34s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.34s)

TestImageBuild/serial/Setup (37.61s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-712000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-712000 --driver=hyperkit : (37.60666698s)
--- PASS: TestImageBuild/serial/Setup (37.61s)

TestImageBuild/serial/NormalBuild (1.66s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-712000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-712000: (1.655409973s)
--- PASS: TestImageBuild/serial/NormalBuild (1.66s)

TestImageBuild/serial/BuildWithBuildArg (0.85s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-712000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.85s)

TestImageBuild/serial/BuildWithDockerIgnore (0.61s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-712000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.61s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.68s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-712000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.68s)

TestJSONOutput/start/Command (48.52s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-594000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-594000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (48.519194495s)
--- PASS: TestJSONOutput/start/Command (48.52s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.47s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-594000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-594000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.33s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-594000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-594000 --output=json --user=testUser: (8.327429137s)
--- PASS: TestJSONOutput/stop/Command (8.33s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.58s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-084000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-084000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (362.399708ms)

-- stdout --
	{"specversion":"1.0","id":"f445b76e-e3bb-4809-a5be-30d87227a1c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-084000] minikube v1.34.0 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b8100f3-8f91-447f-b61f-9f13de417074","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"f0d65204-c217-4756-8666-612ef924b638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig"}}
	{"specversion":"1.0","id":"ead86ece-682e-4e58-9f20-fc41f70445dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"db369c96-561e-4586-bfee-e0d39553cab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f258139-b641-4196-98b7-648d5ab09092","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube"}}
	{"specversion":"1.0","id":"e397fdf7-02f7-40ff-b476-e6063b94a647","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"87e54781-0503-407a-8ef5-c8776d9a7e66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-084000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-084000
--- PASS: TestErrorJSONOutput (0.58s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (86.26s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-737000 --driver=hyperkit 
E0906 12:12:59.515739    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-737000 --driver=hyperkit : (37.584381093s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-739000 --driver=hyperkit 
E0906 12:13:13.448519    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-739000 --driver=hyperkit : (39.21225302s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-737000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-739000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-739000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-739000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-739000: (3.3927557s)
helpers_test.go:175: Cleaning up "first-737000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-737000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-737000: (5.235476534s)
--- PASS: TestMinikubeProfile (86.26s)

TestMultiNode/serial/FreshStart2Nodes (106.43s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-459000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0906 12:17:59.513735    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-459000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m46.18819234s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.43s)

TestMultiNode/serial/DeployApp2Nodes (5.22s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-459000 -- rollout status deployment/busybox: (3.556129759s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-b9hnk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-m65s6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-b9hnk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-m65s6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-b9hnk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-m65s6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.22s)

TestMultiNode/serial/PingHostFrom2Pods (0.93s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-b9hnk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-b9hnk -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-m65s6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-459000 -- exec busybox-7dff88458-m65s6 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

TestMultiNode/serial/AddNode (45.33s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-459000 -v 3 --alsologtostderr
E0906 12:18:13.448108    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-459000 -v 3 --alsologtostderr: (45.013982152s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.33s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-459000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (5.3s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp testdata/cp-test.txt multinode-459000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile578296277/001/cp-test_multinode-459000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000:/home/docker/cp-test.txt multinode-459000-m02:/home/docker/cp-test_multinode-459000_multinode-459000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m02 "sudo cat /home/docker/cp-test_multinode-459000_multinode-459000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000:/home/docker/cp-test.txt multinode-459000-m03:/home/docker/cp-test_multinode-459000_multinode-459000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m03 "sudo cat /home/docker/cp-test_multinode-459000_multinode-459000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp testdata/cp-test.txt multinode-459000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile578296277/001/cp-test_multinode-459000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt multinode-459000:/home/docker/cp-test_multinode-459000-m02_multinode-459000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000 "sudo cat /home/docker/cp-test_multinode-459000-m02_multinode-459000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000-m02:/home/docker/cp-test.txt multinode-459000-m03:/home/docker/cp-test_multinode-459000-m02_multinode-459000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m03 "sudo cat /home/docker/cp-test_multinode-459000-m02_multinode-459000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp testdata/cp-test.txt multinode-459000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile578296277/001/cp-test_multinode-459000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt multinode-459000:/home/docker/cp-test_multinode-459000-m03_multinode-459000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000 "sudo cat /home/docker/cp-test_multinode-459000-m03_multinode-459000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 cp multinode-459000-m03:/home/docker/cp-test.txt multinode-459000-m02:/home/docker/cp-test_multinode-459000-m03_multinode-459000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 ssh -n multinode-459000-m02 "sudo cat /home/docker/cp-test_multinode-459000-m03_multinode-459000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.30s)
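The CopyFile sequence above repeats one pattern per node pair: `minikube cp` a file in, then `minikube ssh -n <node> "sudo cat ..."` it back out and compare. A local stand-in for that round trip, using plain cp and cat over a temp directory purely for illustration (the payload string is invented, not the real testdata/cp-test.txt contents):

```shell
#!/bin/sh
# Local stand-in for the CopyFile round trip: the real test uses
# `minikube cp` and `minikube ssh -n <node> "sudo cat ..."`; plain cp and
# cat over a temp directory model the same copy-then-read-back check.
tmp=$(mktemp -d)
printf 'sample payload\n' > "$tmp/cp-test.txt"

cp "$tmp/cp-test.txt" "$tmp/cp-test_node-m02.txt"   # stands in for `minikube cp`
readback=$(cat "$tmp/cp-test_node-m02.txt")         # stands in for ssh + sudo cat

# The test passes only when the file read back from the target node matches
# what was copied in.
if [ "$readback" = 'sample payload' ]; then result=match; else result=differ; fi
echo "$result"   # → match
rm -rf "$tmp"
```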

TestMultiNode/serial/StopNode (2.83s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-459000 node stop m03: (2.335755382s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-459000 status: exit status 7 (249.36783ms)
-- stdout --
	multinode-459000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-459000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-459000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr: exit status 7 (246.936862ms)
-- stdout --
	multinode-459000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-459000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-459000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0906 12:19:03.586239   13057 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:19:03.586507   13057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:19:03.586514   13057 out.go:358] Setting ErrFile to fd 2...
	I0906 12:19:03.586518   13057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:19:03.586714   13057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:19:03.586900   13057 out.go:352] Setting JSON to false
	I0906 12:19:03.586923   13057 mustload.go:65] Loading cluster: multinode-459000
	I0906 12:19:03.586964   13057 notify.go:220] Checking for updates...
	I0906 12:19:03.587218   13057 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:19:03.587234   13057 status.go:255] checking status of multinode-459000 ...
	I0906 12:19:03.587605   13057 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:19:03.587651   13057 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:19:03.596744   13057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57394
	I0906 12:19:03.597221   13057 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:19:03.597665   13057 main.go:141] libmachine: Using API Version  1
	I0906 12:19:03.597677   13057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:19:03.597888   13057 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:19:03.598011   13057 main.go:141] libmachine: (multinode-459000) Calling .GetState
	I0906 12:19:03.598085   13057 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:19:03.598163   13057 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 12754
	I0906 12:19:03.599369   13057 status.go:330] multinode-459000 host status = "Running" (err=<nil>)
	I0906 12:19:03.599388   13057 host.go:66] Checking if "multinode-459000" exists ...
	I0906 12:19:03.599625   13057 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:19:03.599645   13057 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:19:03.608150   13057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57396
	I0906 12:19:03.608509   13057 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:19:03.608889   13057 main.go:141] libmachine: Using API Version  1
	I0906 12:19:03.608905   13057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:19:03.609113   13057 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:19:03.609232   13057 main.go:141] libmachine: (multinode-459000) Calling .GetIP
	I0906 12:19:03.609315   13057 host.go:66] Checking if "multinode-459000" exists ...
	I0906 12:19:03.609562   13057 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:19:03.609588   13057 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:19:03.618207   13057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57398
	I0906 12:19:03.618539   13057 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:19:03.618867   13057 main.go:141] libmachine: Using API Version  1
	I0906 12:19:03.618883   13057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:19:03.619096   13057 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:19:03.619201   13057 main.go:141] libmachine: (multinode-459000) Calling .DriverName
	I0906 12:19:03.619364   13057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:19:03.619392   13057 main.go:141] libmachine: (multinode-459000) Calling .GetSSHHostname
	I0906 12:19:03.619474   13057 main.go:141] libmachine: (multinode-459000) Calling .GetSSHPort
	I0906 12:19:03.619557   13057 main.go:141] libmachine: (multinode-459000) Calling .GetSSHKeyPath
	I0906 12:19:03.619646   13057 main.go:141] libmachine: (multinode-459000) Calling .GetSSHUsername
	I0906 12:19:03.619727   13057 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000/id_rsa Username:docker}
	I0906 12:19:03.653516   13057 ssh_runner.go:195] Run: systemctl --version
	I0906 12:19:03.657745   13057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:19:03.668274   13057 kubeconfig.go:125] found "multinode-459000" server: "https://192.169.0.33:8443"
	I0906 12:19:03.668299   13057 api_server.go:166] Checking apiserver status ...
	I0906 12:19:03.668338   13057 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 12:19:03.679339   13057 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1876/cgroup
	W0906 12:19:03.686851   13057 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1876/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 12:19:03.686898   13057 ssh_runner.go:195] Run: ls
	I0906 12:19:03.690242   13057 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0906 12:19:03.693178   13057 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0906 12:19:03.693190   13057 status.go:422] multinode-459000 apiserver status = Running (err=<nil>)
	I0906 12:19:03.693206   13057 status.go:257] multinode-459000 status: &{Name:multinode-459000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:19:03.693217   13057 status.go:255] checking status of multinode-459000-m02 ...
	I0906 12:19:03.693457   13057 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:19:03.693480   13057 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:19:03.702239   13057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57402
	I0906 12:19:03.702590   13057 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:19:03.702952   13057 main.go:141] libmachine: Using API Version  1
	I0906 12:19:03.702968   13057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:19:03.703160   13057 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:19:03.703277   13057 main.go:141] libmachine: (multinode-459000-m02) Calling .GetState
	I0906 12:19:03.703364   13057 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:19:03.703432   13057 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 12773
	I0906 12:19:03.704628   13057 status.go:330] multinode-459000-m02 host status = "Running" (err=<nil>)
	I0906 12:19:03.704636   13057 host.go:66] Checking if "multinode-459000-m02" exists ...
	I0906 12:19:03.704897   13057 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:19:03.704930   13057 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:19:03.713698   13057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57404
	I0906 12:19:03.714073   13057 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:19:03.714417   13057 main.go:141] libmachine: Using API Version  1
	I0906 12:19:03.714430   13057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:19:03.714620   13057 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:19:03.714724   13057 main.go:141] libmachine: (multinode-459000-m02) Calling .GetIP
	I0906 12:19:03.714808   13057 host.go:66] Checking if "multinode-459000-m02" exists ...
	I0906 12:19:03.715055   13057 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:19:03.715075   13057 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:19:03.723585   13057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57406
	I0906 12:19:03.723923   13057 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:19:03.724289   13057 main.go:141] libmachine: Using API Version  1
	I0906 12:19:03.724304   13057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:19:03.724506   13057 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:19:03.724607   13057 main.go:141] libmachine: (multinode-459000-m02) Calling .DriverName
	I0906 12:19:03.724729   13057 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 12:19:03.724740   13057 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHHostname
	I0906 12:19:03.724815   13057 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHPort
	I0906 12:19:03.724888   13057 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHKeyPath
	I0906 12:19:03.724973   13057 main.go:141] libmachine: (multinode-459000-m02) Calling .GetSSHUsername
	I0906 12:19:03.725056   13057 sshutil.go:53] new ssh client: &{IP:192.169.0.34 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19576-7784/.minikube/machines/multinode-459000-m02/id_rsa Username:docker}
	I0906 12:19:03.752638   13057 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 12:19:03.763513   13057 status.go:257] multinode-459000-m02 status: &{Name:multinode-459000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:19:03.763533   13057 status.go:255] checking status of multinode-459000-m03 ...
	I0906 12:19:03.763794   13057 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:19:03.763818   13057 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:19:03.772483   13057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57409
	I0906 12:19:03.772842   13057 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:19:03.773200   13057 main.go:141] libmachine: Using API Version  1
	I0906 12:19:03.773214   13057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:19:03.773450   13057 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:19:03.773573   13057 main.go:141] libmachine: (multinode-459000-m03) Calling .GetState
	I0906 12:19:03.773697   13057 main.go:141] libmachine: (multinode-459000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:19:03.773786   13057 main.go:141] libmachine: (multinode-459000-m03) DBG | hyperkit pid from json: 12846
	I0906 12:19:03.774949   13057 main.go:141] libmachine: (multinode-459000-m03) DBG | hyperkit pid 12846 missing from process table
	I0906 12:19:03.774983   13057 status.go:330] multinode-459000-m03 host status = "Stopped" (err=<nil>)
	I0906 12:19:03.774990   13057 status.go:343] host is not running, skipping remaining checks
	I0906 12:19:03.774997   13057 status.go:257] multinode-459000-m03 status: &{Name:multinode-459000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.83s)
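In the stderr trace above, the status check probes guest disk usage over SSH with `df -h /var | awk 'NR==2{print $5}'` (ssh_runner.go:195). A standalone sketch of that extraction against a canned df transcript (the filesystem name and numbers are invented for illustration):

```shell
#!/bin/sh
# Canned `df -h /var` transcript (hypothetical device and sizes); the
# status code runs the same command over SSH inside the guest VM.
sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        17G  2.1G   14G  14% /var'

# NR==2 skips the header row; $5 is the Use% column of the /var filesystem.
usage=$(printf '%s\n' "$sample" | awk 'NR==2{print $5}')
echo "$usage"   # → 14%
```

awk's default whitespace field splitting is what makes `$5` line up with the Use% column regardless of how df pads its columns.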

TestMultiNode/serial/StartAfterStop (41.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-459000 node start m03 -v=7 --alsologtostderr: (41.25914647s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.63s)

TestMultiNode/serial/StopMultiNode (16.78s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-459000 stop: (16.611581022s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-459000 status: exit status 7 (82.555409ms)
-- stdout --
	multinode-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-459000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr: exit status 7 (81.314906ms)
-- stdout --
	multinode-459000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-459000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0906 12:26:02.093574   13241 out.go:345] Setting OutFile to fd 1 ...
	I0906 12:26:02.093823   13241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:26:02.093830   13241 out.go:358] Setting ErrFile to fd 2...
	I0906 12:26:02.093833   13241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 12:26:02.094006   13241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19576-7784/.minikube/bin
	I0906 12:26:02.094229   13241 out.go:352] Setting JSON to false
	I0906 12:26:02.094250   13241 mustload.go:65] Loading cluster: multinode-459000
	I0906 12:26:02.094287   13241 notify.go:220] Checking for updates...
	I0906 12:26:02.094543   13241 config.go:182] Loaded profile config "multinode-459000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 12:26:02.094558   13241 status.go:255] checking status of multinode-459000 ...
	I0906 12:26:02.094910   13241 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:26:02.094958   13241 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:26:02.103849   13241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57634
	I0906 12:26:02.104279   13241 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:26:02.104704   13241 main.go:141] libmachine: Using API Version  1
	I0906 12:26:02.104717   13241 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:26:02.104927   13241 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:26:02.105056   13241 main.go:141] libmachine: (multinode-459000) Calling .GetState
	I0906 12:26:02.105147   13241 main.go:141] libmachine: (multinode-459000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:26:02.105209   13241 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid from json: 13116
	I0906 12:26:02.106139   13241 main.go:141] libmachine: (multinode-459000) DBG | hyperkit pid 13116 missing from process table
	I0906 12:26:02.106154   13241 status.go:330] multinode-459000 host status = "Stopped" (err=<nil>)
	I0906 12:26:02.106164   13241 status.go:343] host is not running, skipping remaining checks
	I0906 12:26:02.106171   13241 status.go:257] multinode-459000 status: &{Name:multinode-459000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 12:26:02.106193   13241 status.go:255] checking status of multinode-459000-m02 ...
	I0906 12:26:02.106425   13241 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0906 12:26:02.106448   13241 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0906 12:26:02.114837   13241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57636
	I0906 12:26:02.115186   13241 main.go:141] libmachine: () Calling .GetVersion
	I0906 12:26:02.115507   13241 main.go:141] libmachine: Using API Version  1
	I0906 12:26:02.115537   13241 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 12:26:02.115761   13241 main.go:141] libmachine: () Calling .GetMachineName
	I0906 12:26:02.115864   13241 main.go:141] libmachine: (multinode-459000-m02) Calling .GetState
	I0906 12:26:02.115944   13241 main.go:141] libmachine: (multinode-459000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0906 12:26:02.116022   13241 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid from json: 13138
	I0906 12:26:02.116909   13241 main.go:141] libmachine: (multinode-459000-m02) DBG | hyperkit pid 13138 missing from process table
	I0906 12:26:02.116931   13241 status.go:330] multinode-459000-m02 host status = "Stopped" (err=<nil>)
	I0906 12:26:02.116937   13241 status.go:343] host is not running, skipping remaining checks
	I0906 12:26:02.116943   13241 status.go:257] multinode-459000-m02 status: &{Name:multinode-459000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.78s)

TestMultiNode/serial/RestartMultiNode (215.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-459000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0906 12:27:59.527962    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:28:13.460823    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-459000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (3m35.467620126s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-459000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (215.81s)

TestMultiNode/serial/ValidateNameConflict (45.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-459000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-459000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-459000-m02 --driver=hyperkit : exit status 14 (433.351257ms)
-- stdout --
	* [multinode-459000-m02] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-459000-m02' is duplicated with machine name 'multinode-459000-m02' in profile 'multinode-459000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-459000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-459000-m03 --driver=hyperkit : (41.452336117s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-459000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-459000: exit status 80 (307.385887ms)
-- stdout --
	* Adding node m03 to cluster multinode-459000 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-459000-m03 already exists in multinode-459000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-459000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-459000-m03: (3.577389826s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.83s)

TestPreload (140.47s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-278000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0906 12:31:16.537804    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-278000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m12.76814415s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-278000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-278000 image pull gcr.io/k8s-minikube/busybox: (1.419639875s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-278000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-278000: (8.376664034s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-278000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-278000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (52.503985732s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-278000 image list
helpers_test.go:175: Cleaning up "test-preload-278000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-278000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-278000: (5.247869047s)
--- PASS: TestPreload (140.47s)

TestSkaffold (116.32s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4163603415 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4163603415 version: (1.799502508s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-042000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-042000 --memory=2600 --driver=hyperkit : (39.750428943s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4163603415 run --minikube-profile skaffold-042000 --kube-context skaffold-042000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4163603415 run --minikube-profile skaffold-042000 --kube-context skaffold-042000 --status-check=true --port-forward=false --interactive=false: (56.794880927s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-58f889d445-drx8t" [c201dc40-bf45-4681-89fa-d8ed9bac8aeb] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003734558s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5944554f9-8ldw8" [d526a6f5-e55e-43af-aff3-dbc693f4c15b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004572811s
helpers_test.go:175: Cleaning up "skaffold-042000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-042000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-042000: (5.2411154s)
--- PASS: TestSkaffold (116.32s)

TestRunningBinaryUpgrade (92.81s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2142950973 start -p running-upgrade-606000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2142950973 start -p running-upgrade-606000 --memory=2200 --vm-driver=hyperkit : (1m2.115130433s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-606000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-606000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (22.885403205s)
helpers_test.go:175: Cleaning up "running-upgrade-606000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-606000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-606000: (5.23925224s)
--- PASS: TestRunningBinaryUpgrade (92.81s)

TestKubernetesUpgrade (1348.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (54.939915021s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-382000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-382000: (2.373469434s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-382000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-382000 status --format={{.Host}}: exit status 7 (68.318598ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0906 12:52:59.573108    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:53:13.506208    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:54:22.667416    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:56:59.967451    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:57:59.616974    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:58:13.549130    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 12:58:23.095588    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:02:00.014164    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:02:59.615764    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (10m44.040382432s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-382000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (511.97159ms)

-- stdout --
	* [kubernetes-upgrade-382000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-382000
	    minikube start -p kubernetes-upgrade-382000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3820002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-382000 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit 
E0906 13:03:13.549085    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:04:36.633994    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:07:00.013625    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:07:59.616185    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:08:13.550493    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:11:02.713310    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:12:00.011132    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:12:59.621722    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/addons-565000/client.crt: no such file or directory" logger="UnhandledError"
E0906 13:13:13.554864    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/functional-123000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-382000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=hyperkit : (10m40.902794978s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-382000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-382000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-382000: (5.270862444s)
--- PASS: TestKubernetesUpgrade (1348.16s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.15s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19576
- KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1979829106/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1979829106/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1979829106/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1979829106/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.15s)

TestStoppedBinaryUpgrade/Setup (1.29s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.29s)

TestStoppedBinaryUpgrade/Upgrade (123.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.986169223 start -p stopped-upgrade-671000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.986169223 start -p stopped-upgrade-671000 --memory=2200 --vm-driver=hyperkit : (40.31063848s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.986169223 -p stopped-upgrade-671000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.986169223 -p stopped-upgrade-671000 stop: (8.253397959s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-671000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0906 13:15:03.100831    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-671000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m15.118315s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.68s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.48s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-671000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-671000: (2.483851457s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-898000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-898000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (465.782925ms)

-- stdout --
	* [NoKubernetes-898000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19576
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19576-7784/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19576-7784/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

TestNoKubernetes/serial/StartWithK8s (71.62s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-898000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-898000 --driver=hyperkit : (1m11.452795268s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-898000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (71.62s)

                                                

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-898000 --no-kubernetes --driver=hyperkit 
E0906 13:17:00.015992    8364 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19576-7784/.minikube/profiles/skaffold-042000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-898000 --no-kubernetes --driver=hyperkit : (6.119652312s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-898000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-898000 status -o json: exit status 2 (154.458555ms)

-- stdout --
	{"Name":"NoKubernetes-898000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-898000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-898000: (2.407836174s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.68s)

TestNoKubernetes/serial/Start (21.91s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-898000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-898000 --no-kubernetes --driver=hyperkit : (21.906212578s)
--- PASS: TestNoKubernetes/serial/Start (21.91s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-898000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-898000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (129.686417ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (0.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.45s)

TestNoKubernetes/serial/Stop (2.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-898000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-898000: (2.379981265s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

TestNoKubernetes/serial/StartNoArgs (19.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-898000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-898000 --driver=hyperkit : (19.276760643s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.28s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-898000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-898000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (126.8178ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)


Test skip (19/219)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/specific-port (13.08s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port720398649/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.742743ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.209992ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (159.112671ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (121.383905ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (144.795217ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (120.756962ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (124.50813ms)
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-123000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-123000 ssh "sudo umount -f /mount-9p": exit status 1 (133.823371ms)
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-123000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-123000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port720398649/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.08s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)